
Fear of Decoupling
- Moscow, Russia
- Yegor Bugayenko
https://www.yegor256.com/2018/09/18/fear-of-coupling.html
Objects talk to each other via their methods. In mainstream programming languages, like Java or C#, an object may have a unique set of methods together with some methods it is forced to have because it implements certain types, also known as interfaces. My experience of speaking with many programmers tells me that most of us are pretty scared of objects that implement too many interface methods. We don’t want to deal with them since they are polymorphic and, because of that, unreliable. It’s a fair fear. Let’s try to analyze where it comes from.

As usual, let’s start with a simple Java example. Here is an amount of money I’m going to send to a user via, say, the PayPal API:
interface Money {
  double cents();
}

Now, here is the method that sends the money:
void send(Money m) {
  double c = m.cents();
  // Send them over via the API...
}

These two pieces of code are, as we call it, loosely coupled. The method send() has no idea which class is provided or how exactly the method cents() is implemented. Maybe it’s a simple constant object of one dollar:
class OneDollar implements Money {
  @Override
  public double cents() {
    return 100.0d;
  }
}

Or maybe it’s a way more complex entity that makes a network connection first, in order to fetch the current USD-to-EUR exchange rate, update the database, and then return the result of some calculation:
class EmployeeHourlyRate implements Money {
  @Override
  public double cents() {
    // Fetch the exchange rate;
    // update the database;
    // calculate the hourly rate;
    // and return the value:
    return 0.0d; // placeholder for the calculated value
  }
}

The method send() doesn’t know what exactly is provided as its first argument. All it can do is hope that the method cents() will do the work right. What if it doesn’t?
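Before answering that question, the happy path is worth seeing: both implementations above can be handed to the very same send(). Here is a self-contained sketch; the class bodies are simplified placeholders, and this variant of send() returns the amount only so its behavior is observable, unlike the void version above.

```java
interface Money {
  double cents();
}

class OneDollar implements Money {
  @Override
  public double cents() {
    return 100.0d;
  }
}

class EmployeeHourlyRate implements Money {
  @Override
  public double cents() {
    return 2500.0d; // placeholder for the real rate calculation
  }
}

class Payments {
  // Returns the amount it would send, so the demo is observable.
  static double send(Money m) {
    double c = m.cents();
    // Send the cents over via the API...
    return c;
  }
}
```

The caller never names a concrete class; it only relies on the Money contract.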
If I’m a developer of the method send() and I’m fully prepared to take the blame for the mistakes my method causes, I do want to know what my collaborators are. And I want to be absolutely sure they work. Not just work, but work exactly how I expect them to. Preferably I would like to write them myself. Ideally I would like to ensure that nobody touches them after I implement them. You get the sarcasm, right?
This may sound like a joke, but I have heard this argument many times. They say that “it’s better to be completely sure two pieces work together, instead of relying on the damn polymorphism and then spending hours debugging something I didn’t write.” And they are right, you know. Polymorphism—when a seemingly primitive object of type Money does whatever it wants, including HTTP requests and SQL UPDATE queries—doesn’t add reliability to the entire application, does it?
No, it doesn’t.
Obviously, polymorphism makes the life of the developers of this type Money and its implementors way simpler, since they don’t have to think about their users much. All they worry about is how to return the double when cents() is called. They don’t need to care about speed, potential exceptions, memory usage, and many other things, since the interface doesn’t require that. It only tells them to return the double and call it a day. Let somebody else worry about everything else. Easy, huh? But that’s a childish and egoistic way of thinking, you may say!
Yes, it is.
However…
You’ve most definitely heard of the Fail Fast idea, which, in a nutshell, claims that in order to make an application robust and stable we have to make sure its components are as fragile as possible and as vulnerable as they can be in response to any potential exceptional situation. They have to break whenever they can and let their users deal with the failures. With such a philosophy no object will assume anything good about its counterparts and will always try to escalate problems to higher levels, which eventually will hit the end user who will report them back to the team. The team will fix them all and the entire product will stabilize.
If the philosophy is the opposite and every object is trying to deal with problems on its individual micro level, the majority of exceptional situations will never be visible to users, testers, architects and programmers, who are supposed to be dealing with them and finding solutions for them. Thanks to this “careful” mindset of individual objects, the stability and robustness of the entire application will suffer.
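In code, the contrast between the two philosophies looks roughly like this. Both classes here are hypothetical illustrations, not taken from the examples above:

```java
interface Money {
  double cents();
}

// "Fail safe": the object swallows the failure and hands back a
// plausible-looking number; the bug stays invisible to everybody.
class QuietRate implements Money {
  @Override
  public double cents() {
    try {
      return fetchRate() * 100.0d;
    } catch (RuntimeException ex) {
      return 0.0d; // nobody will ever hear about the problem
    }
  }
  private double fetchRate() {
    throw new IllegalStateException("network is down");
  }
}

// "Fail fast": the object breaks loudly, so the failure escalates
// until somebody who can actually fix it sees it.
class LoudRate implements Money {
  @Override
  public double cents() {
    throw new IllegalStateException("network is down");
  }
}
```

QuietRate hides its broken network call behind a silent zero, while LoudRate turns the same failure into a visible defect report.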
We can apply the same logic to the “fear of loose coupling.”
When we worry about how Money.cents() works and want to control its behavior, we are doing ourselves and the entire project a big disservice. In the long run we destabilize the product, instead of making it more stable. Some even want to prohibit polymorphism by declaring method send() this way:
void send(EmployeeHourlyRate m) {
  // Now I know that it's not some abstract Money,
  // but a very specific class EmployeeHourlyRate, which
  // was implemented by Bobby, a good friend of mine.
}

Here we limit the number of mistakes our code may have, since we know Bobby, we’ve seen his code, we know how it works and which exceptions to expect. We are safe. Yes, we are. For now. But strategically speaking, by not allowing our software to make all possible mistakes and throw all possible exceptions in all unusual situations, we are seriously limiting its ability to be properly tested, and that’s why it gets destabilized.
As I mentioned earlier, the only way to increase the quality of software is to find and fix its bugs. The more bugs we fix, the fewer remain hidden and unfixed. A fear of bugs, and our intention to prevent them, only shoots us in the foot.
Instead, we should let everybody, not only Bobby, implement Money and pass those implementations to send(). Yes, some of them will cause troubles and may even lead to UI-visible failures. But if our management understands the concept of software quality right, they will not blame us for mistakes. Instead, they will encourage us to find as many of them as possible, reproduce them with automated tests, fix, and re-deploy.
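Loose coupling is exactly what makes that reproduction possible: a failure reported from the field can be replayed by passing a deliberately broken Money into send(). A sketch, where the broken implementation and the validation inside send() are hypothetical additions:

```java
interface Money {
  double cents();
}

// A deliberately broken implementation that replays a reported bug.
class BrokenMoney implements Money {
  @Override
  public double cents() {
    return Double.NaN; // the garbage value observed in production
  }
}

class Payments {
  static void send(Money m) {
    double c = m.cents();
    // Fail fast on garbage instead of pushing it to the API:
    if (Double.isNaN(c) || c < 0.0d) {
      throw new IllegalArgumentException("refusing to send " + c + " cents");
    }
    // Send the cents over via the API...
  }
}
```

Once the fake reproduces the failure in a test, the fix and the re-deploy follow naturally.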
Thus, the fear of decoupling is nothing else but Fail Safe.
How often do you create interfaces for your classes? #elegantobjects #oop
— Yegor Bugayenko (@yegor256) September 30, 2018

https://www.yegor256.com/2018/08/22/builders-and-manipulators.html
Builders and Manipulators
- Palo Alto, CA
- Yegor Bugayenko
Here is a simple principle for naming methods in OOP, which I’m trying to follow in my code: it’s a verb if it manipulates, it’s a noun if it builds. That’s it. Nothing in between. Methods like saveFile() or getTitle() don’t fit and must be renamed and refactored. Moreover, methods that “manipulate” must always return void, for example print() or save(). Let me explain.

First, I have to say that this idea is very similar to the one suggested by Bertrand Meyer in his book Object Oriented Software Construction, where he proposes we divide an object’s methods into two sharply separated categories: queries and commands.
The idea behind this principle is rather philosophical. Let’s start with builders, which are supposed to create or find an object and then return it. Suppose I have a store of books and I ask it to give me a book by name:
interface Bookshelf {
  Book find(String title);
}

It’s obviously a “builder” (or a “query” in Meyer’s terms). I ask for a book and it’s given to me. The problem, though, is with the name of the method. It’s called “find,” which implies that I know how the book will be dealt with. It will be found.
However, this is not how we should treat our objects. We must not tell them how to do the job we want them to do. Instead, we must let them decide whether the book will be found, constructed, or maybe taken from a memory cache. When we query, we have to say what result we are looking for and let the object make the decision about the way this result is going to be built. A much more appropriate name for this method would be book():
interface Bookshelf {
  Book book(String title);
}

The rule of thumb is: a builder is always a noun. If the method returns something, it has to be a noun. Preferably its name should explain what the method returns. If it’s a book, name it book(). If it’s a file, call the method file(), etc. Here are a few good builder examples:
interface Foo {
  float speed(Actor actor);
  Money salary(User user);
  File database();
  Date deadline(Project project, User user);
}

Here, on the contrary, are a few examples of badly named builders:
interface Foo {
  float calculateSpeed(Actor actor);
  Money getSalary(User user);
  File openDatabase();
  Date readDeadline(Project project, User user);
}

There is no place for a verb in a builder’s name!
It’s not only about the name, by the way. A builder, since its name doesn’t contain a verb, should not do any modifications to the encapsulated entities. It may only create or find something and return it. Just like a pure function, it must not have any side-effects.
Next, there are “manipulators” (or “commands” in Meyer’s terms). They do some work for us, modifying the entities the object encapsulates. They are the opposite of builders, because they actually make changes to the world abstracted by the object. For example, we ask the Bookshelf to add a new book to itself:
interface Bookshelf {
  void add(Book book);
}

The method adds the book to the storage. How exactly the storage will be modified, we don’t know. But we know that since the name of the method is a verb, there will be modifications.
Also, manipulators must not return anything. It’s always void that we see as the type of their response. This is needed mostly in order to separate the imperative part of the code from the declarative part. We either receive objects or tell them what to do. We must not mix those activities in one method.
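To make the separation concrete, here is a minimal sketch of a class that implements both kinds of methods. The one-method Book type, InMemoryBookshelf, and its list-backed storage are my assumptions for illustration, not code from the article:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical one-method Book type, assumed for this sketch.
interface Book {
  String title();
}

interface Bookshelf {
  Book book(String title); // builder: a noun, returns an object
  void add(Book book);     // manipulator: a verb, returns void
}

// An assumed list-backed implementation, only to show the split.
class InMemoryBookshelf implements Bookshelf {
  private final List<Book> books = new ArrayList<>();

  @Override
  public Book book(String title) {
    // Builder: finds and returns, never modifies the list.
    return this.books.stream()
      .filter(b -> b.title().equals(title))
      .findFirst()
      .orElseThrow(() -> new IllegalArgumentException("No such book: " + title));
  }

  @Override
  public void add(Book book) {
    // Manipulator: modifies the encapsulated state, returns nothing.
    this.books.add(book);
  }
}
```

A caller either tells the shelf to change (add) or asks it for an object (book), never both in one call.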
The purpose of these rules is to make the code simpler. If you follow them, and all your builders only return objects and your manipulators only modify the world, the entire design will become easier to understand. Methods will be smaller and their names shorter.
Of course, very often you will have a hard time finding those names. From time to time you will want to return something from a manipulator or make your builder make some changes, say to the cache. Try to resist this temptation and stay with the principle: a method is either a builder or a manipulator, nothing in the middle. The examples above are rather primitive, the code in real life is much more complicated. But that’s what the principle is going to help us with—making the code simpler.
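A classic case of the temptation is a stack whose pop() both returns the top element and removes it, making it a builder and a manipulator at once. Here is a sketch (my example, not the article's) of how such a mixed method can be split:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Mixed design: pop() queries and commands in a single call.
class MixedStack {
  private final Deque<String> items = new ArrayDeque<>();
  void push(String item) { this.items.push(item); }
  String pop() { return this.items.pop(); } // builder AND manipulator: avoid
}

// The same behavior, split according to the principle.
class SeparatedStack {
  private final Deque<String> items = new ArrayDeque<>();
  void push(String item) { this.items.push(item); } // manipulator
  String top() { return this.items.peek(); }        // builder: noun, no change
  void remove() { this.items.pop(); }               // manipulator: verb, void
}
```

Where the mixed version says pop() once, the separated one says top() and then remove(): two honest calls instead of one ambiguous one.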
I’m also aware of the noun/verb principle, which suggests always naming classes as nouns and their methods as verbs. I believe it’s a wrong idea, since it doesn’t differentiate builders from manipulators and encourages us to always think in terms of imperative instructions. I believe that OOP must be much more about declarative composition of objects, even if we have to sometimes get them from other objects instead of instantiating them via constructors. That’s why we do need builders in most situations and we also have to see an obvious difference between them and the other methods, manipulators.
You can find a more detailed discussion of this problem in Elegant Objects, Volume 1, Section 2.4.
How would you name a method of a class Document that reads and returns its content? #elegantobjects
— Yegor Bugayenko (@yegor256) August 26, 2018

https://www.yegor256.com/2018/07/03/global-variables.html
What's Wrong With Global Variables?
- Moscow, Russia
- Yegor Bugayenko
Only lazy people haven’t already written about how global variables are evil. It started in 1973 when W. Wulf et al. claimed that “the non-local variable is a major contributing factor in programs which are difficult to understand.” Since then, many other reasons were suggested to convince programmers to stop using global variables. I think I read them all, but didn’t find the one that bothers me most of all: composability. In a nutshell, global variables make code difficult or impossible to compose in ways which are different from what its original author expected.

I was recently writing a web front for Zold in Ruby, on top of Sinatra. This is how a web server starts according to their documentation:
App.start!

Here start! is a static method of the App class, which you have to declare as a child of their default parent Sinatra::Base. To tell the app which TCP port to listen to, you have to preconfigure it:
require 'sinatra/base'

class App < Sinatra::Base
  get '/' do
    'Hello, world!'
  end
end

App.set(:port, 8080)
App.start!

What do you do if you want to start two web servers? For the purpose of testing this may be a pretty logical requirement. For example, since Zold is a distributed network, it is necessary to test how a number of servers communicate with each other. I can’t do that! There is absolutely no way, because Sinatra is designed with the assumption that only one server may exist in the entire application scope.
Can this really be fixed? Let’s take a look at their code. Class Sinatra::Base is essentially a Singleton, which is not supposed to have more than one instance. When we call App.set(:port, 8080), the value 8080 is saved into an attribute of a single instance. The number 8080 becomes available for all methods of Sinatra::Base, no matter what instance they are called from.
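The same design can be sketched in Java with a static attribute; GloballyScopedApp is a hypothetical class of mine, shown only to make the sharing visible:

```java
// A sketch of a "globally scoped" design: the port lives in a static
// attribute, so every instance shares the one value.
class GloballyScopedApp {
  private static int port; // one value for the entire application

  static void set(int value) {
    port = value;
  }

  int port() {
    // Every instance sees whatever the last set() call stored.
    return port;
  }
}
```

No matter how many instances you create, there is effectively one server configuration: changing it for one "server" silently changes it for all of them.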
They are not using true Ruby global variables, I believe, because they know those are bad. Why exactly they are bad, and what the alternatives are, seems to have slipped through their fingers.
Technically speaking, their design is “globally scoped.” Sinatra::Base treats the entire application as its scope of visibility. No matter who calls it, everything is visible, including what was created in previous calls and in previously instantiated objects. This “class” is a giant bag of global variables.
Every global variable is a troublemaker of that kind. While the application is small and its test coverage is low, global variables may not hurt. But the bigger the app and the more sophisticated its automated testing scenarios, the more difficult it will be to compose objects which depend on global variables, singletons, or class variables.
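The composable alternative is to let the configuration enter through the constructor, so each instance owns its own state. This WebApp is a hypothetical sketch, not a real framework API:

```java
// Each instance owns its configuration, so two servers can coexist.
class WebApp {
  private final int port;

  WebApp(int port) {
    this.port = port;
  }

  String address() {
    return "http://localhost:" + this.port;
  }
}
```

Now new WebApp(8080) and new WebApp(8081) are independent objects, which is exactly what a test of several communicating Zold nodes would need.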
My recommendation? Under no circumstances even think about any global variables.
What do you think about global variables? #elegantobjects #oop
— Yegor Bugayenko (@yegor256) July 15, 2018

https://www.yegor256.com/2018/05/29/object-validation.html
Object Validation: to Defer or Not?
- Moscow, Russia
- Yegor Bugayenko
I said earlier that constructors must be code-free and do nothing aside from attribute initialization. Since then, the most frequently asked question is: What about validation of arguments? If they are “broken,” what is the point of creating an object in an “invalid” state? Such an object will fail later, at an unexpected moment. Isn’t it better to throw an exception at the very moment of instantiation? To fail fast, so to speak? Here is what I think.

Let’s start with this Ruby code:
class Users
  def initialize(file)
    @file = file
  end

  def names
    File.readlines(@file).reject(&:empty?)
  end
end

We can use it to read a list of users from a file:

Users.new('all-users.txt').names

There are a number of ways to abuse this class:
- Pass nil to the constructor instead of a file name;
- Pass something else, which is not a String;
- Pass a file that doesn’t exist;
- Pass a directory instead of a file.
Do you see the difference between these four mistakes we can make? Let’s see how our class can protect itself from each of them:
class Users
  def initialize(file)
    raise "File name can't be nil" if file.nil?
    raise 'Name must be a String' unless file.is_a?(String)
    @file = file
  end

  def names
    raise "#{@file} is absent" unless File.exist?(@file)
    raise "#{@file} is not a file" unless File.file?(@file)
    File.readlines(@file).reject(&:empty?)
  end
end

The first two potential mistakes were filtered out in the constructor, while the other two are caught later, in the method. Why did I do it this way? Why not put all of them into the constructor?
Because the first two compromise the object’s state, while the other two compromise its runtime behavior. You remember that an object is a representative of a set of other objects it encapsulates, called attributes. An object of class Users can’t represent nil or a number. It can only represent a file with a name of type String. On the other hand, what that file contains and whether it really is a file doesn’t make the state invalid. It only causes trouble for the behavior.
The difference may look subtle, but it is important. There are two phases of interaction with the encapsulated object: connecting and talking.
First, we encapsulate the file and want to be sure that it really is a file. We are not yet talking to it, we don’t want it to work for us yet, we just want to make sure it really is an object that we will be able to talk to in the near future. If it’s nil or a float, we will have problems in the future, for sure. That’s why we raise an exception from the constructor.
Then the second phase is talking, where we delegate control to the object and expect it to behave correctly. At this phase we may have other validation procedures, in order to make sure our interaction will go smoothly. It’s important to mention that these validations are very situational. We may call names() multiple times and face a different situation with the file on disk every time. To begin with, it may not exist, while a few seconds later it may be ready and available for reading.
Ideally, a programming language should provide instruments for the first type of validations, for example with strict typing. In Java, for example, we would not need to check the type of the file, since the compiler would catch that error earlier. In Kotlin we would be able to get rid of the NULL check, thanks to its Null Safety feature. Ruby is less powerful than those languages, which is why we have to validate “manually.”
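For illustration, here is how the same class might look in Java (my sketch, not code from the article): the compiler already rules out a non-String argument, so the constructor only guards the state against null, while the behavior checks stay in the method:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.Objects;
import java.util.stream.Collectors;

class Users {
  private final Path file;

  Users(Path file) {
    // State validation: an attribute of Users may not be null.
    this.file = Objects.requireNonNull(file, "File can't be null");
  }

  List<String> names() throws IOException {
    // Behavior validation: the situation on disk may differ on every call.
    if (!Files.exists(this.file)) {
      throw new IllegalStateException(this.file + " is absent");
    }
    if (!Files.isRegularFile(this.file)) {
      throw new IllegalStateException(this.file + " is not a file");
    }
    return Files.readAllLines(this.file).stream()
      .filter(line -> !line.isEmpty())
      .collect(Collectors.toList());
  }
}
```

The type of the argument is enforced at compile time, the null check fails at construction, and the disk checks run on every call, mirroring the state/behavior split of the Ruby version.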
Thus, to summarize, validating in constructors is not a bad idea, provided the validations are not touching the objects but only confirm that they are good enough to work with later.
I said earlier that constructors must be code-free and do nothing aside from attribute initialization. Since then, the most frequently asked question is: What about validation of arguments? If they are “broken,” what is the point of creating an object in an “invalid” state? Such an object will fail later, at an unexpected moment. Isn’t it better to throw an exception at the very moment of instantiation? To fail fast, so to speak? Here is what I think.

Let’s start with this Ruby code:
class Users
  def initialize(file)
    @file = file
  end

  def names
    File.readlines(@file).reject(&:empty?)
  end
end
We can use it to read a list of users from a file:
Users.new('all-users.txt').names
There are a number of ways to abuse this class:
- Pass nil to the ctor instead of a file name;
- Pass something else, which is not a String;
- Pass a file that doesn’t exist;
- Pass a directory instead of a file.
Do you see the difference between these four mistakes we can make? Let’s see how our class can protect itself from each of them:
class Users
  def initialize(file)
    raise "File name can't be nil" if file.nil?
    raise 'Name must be a String' unless file.is_a?(String)
    @file = file
  end

  def names
    raise "#{@file} is absent" unless File.exist?(@file)
    raise "#{@file} is not a file" unless File.file?(@file)
    File.readlines(@file).reject(&:empty?)
  end
end
The first two potential mistakes were filtered out in the constructor, while the other two—later, in the method. Why did I do it this way? Why not put all of them into the constructor?
Because the first two compromise object state, while the other two compromise its runtime behavior. You remember that an object is a representative of a set of other objects it encapsulates, called its attributes. The object of class Users can’t represent nil or a number. It can only represent a file with a name of type String. On the other hand, what that file contains and whether it really is a file doesn’t make the state invalid. It only causes trouble for the behavior.
Even though the difference may look subtle, it is real. There are two phases of interaction with the encapsulated object: connecting and talking.
First, we encapsulate the file and want to be sure that it really is a file. We are not yet talking to it, we don’t want it to work for us yet, we just want to make sure it really is an object that we will be able to talk to in the near future. If it’s nil or a float, we will have problems in the future, for sure. That’s why we raise an exception from the constructor.
Then the second phase is talking, where we delegate control to the object and expect it to behave correctly. At this phase we may have other validation procedures, in order to make sure our interaction will go smoothly. It’s important to mention that these validations are very situational. We may call names() multiple times and every time have a different situation with the file on disc. To begin with it may not exist, while in a few seconds it will be ready and available for reading.
Ideally, a programming language should provide instruments for the first type of validations, for example with strict typing. In Java, for example, we would not need to check the type of file, the compiler would catch that error earlier. In Kotlin we would be able to get rid of the NULL check, thanks to their Null Safety feature. Ruby is less powerful than those languages, that’s why we have to validate “manually.”
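To illustrate, here is a minimal sketch of what the first, state-protecting kind of validation shrinks to in a statically typed language. This is a hypothetical Java counterpart of the Ruby class, not code from any real library: the compiler already rejects non-String arguments, so only the null check must be written by hand.

```java
import java.util.Objects;

// Hypothetical Java counterpart of the Ruby Users class. The type
// system rules out "not a String" at compile time; only nullity
// still has to be checked manually in the constructor.
final class Users {
    private final String file;

    Users(String file) {
        // Fail fast: refuse to create an object in a broken state
        this.file = Objects.requireNonNull(file, "File name can't be null");
    }

    String file() {
        return this.file;
    }
}
```

The behavioral checks (does the file exist, is it really a file) would still live in a names() method, because they are situational and may give different answers on every call.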
Thus, to summarize, validating in constructors is not a bad idea, provided the validations are not touching the objects but only confirm that they are good enough to work with later.

https://www.yegor256.com/2018/05/22/default-arguments-against-null.html
One More Recipe Against NULL
- Moscow, Russia
- Yegor Bugayenko
- comments
You know what NULL is, right? It’s evil. In OOP, your method can return NULL, it can accept NULL as an argument, your object can encapsulate it as an attribute, or you can assign it to a variable. All four scenarios are bad for the maintainability of your code—there are no doubts about that. The question is what to do instead. Let’s discuss the “return it” part and I will suggest one more “best practice” on top of what was discussed a few years ago.

Look at this code:
Integer max(List<Integer> items) {
  // Calculate the maximum of all
  // items and return it.
}
What should this method do if the list is empty? Java’s Collections.max() throws an exception. Ruby’s Enumerable.max() returns nil. PHP’s max() returns FALSE. Python’s max() raises an exception. C#’s Enumerable.Max() also throws an exception. JavaScript’s Math.max() returns -Infinity.
Which is the right way, huh? An exception, NULL, false or -Infinity?
An exception, if you ask me.
But there is yet another approach, which is better than an exception. This one:
Integer max(List<Integer> items, Integer def) {
  // Calculate the maximum of all
  // items and return it. Returns 'def' if the
  // list is empty.
}
The “default” object will be returned if the list is empty. This feature is implemented in Python’s max() function: it’s possible to pass both a list and a default element to return in case the list is empty. If the default element is not provided, the exception will be raised.
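Sketched in Java, such a method could simply lean on Optional, mirroring the behavior of Python’s max(iterable, default=...). The class and method names here are illustrative, not from any real library:

```java
import java.util.Collections;
import java.util.List;

// A sketch of the "default instead of exception" approach:
// the caller decides what an empty list should yield.
final class SafeMax {
    static Integer max(List<Integer> items, Integer def) {
        // Return 'def' for an empty list instead of throwing or
        // returning NULL.
        return items.stream().max(Integer::compareTo).orElse(def);
    }

    public static void main(String[] args) {
        System.out.println(max(List.of(3, 1, 4), -1));                     // prints 4
        System.out.println(max(Collections.<Integer>emptyList(), -1));     // prints -1
    }
}
```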
Say, you are designing a method findUserByName(), which has to find a user in the database. What would you return if nothing is found? #elegantobjects
--- Yegor Bugayenko (@yegor256) April 29, 2018

https://www.yegor256.com/2018/03/13/fluent-interfaces.html
Fluent Interfaces Are Bad for Maintainability
- Moscow, Russia
- Yegor Bugayenko
- comments
- Discussed at:
- hackernews
Fluent interface, first coined as a term by Martin Fowler, is a very convenient way of communicating with objects in OOP. It makes their facades easier to use and understand. However, it ruins their internal design, making them more difficult to maintain. A few words were said about that by Marco Pivetta in his blog post Fluent Interfaces are Evil; now I will add my few cents.

Let’s take my own library jcabi-http, which I created a few years ago, when I thought that fluent interfaces were a good thing. Here is how you use the library to make an HTTP request and validate its output:
String html = new JdkRequest("https://www.google.com")
  .method("GET")
  .fetch()
  .as(RestResponse.class)
  .assertStatus(200)
  .body();
This convenient method chaining makes the code short and obvious, right? Yes, it does, on the surface. But the internal design of the library’s classes, including JdkRequest, which is the one you see, is very far from being elegant. The biggest problem is that they are rather big and it’s difficult, if not impossible, to extend them without making them even bigger.
For example, right now JdkRequest has the methods method(), fetch(), and a few others. What happens when new functionality is required? The only way to add to it would be to make the class bigger, by adding new methods, which is how we jeopardize its maintainability. Here, for example, we added multipartBody() and here we added timeout().
I always feel scared when I get a new feature request in jcabi-http. I understand that it most probably means adding new methods to Request, Response, and other already bloated interfaces and classes.
I actually tried to do something in the library in order to solve this problem but it wasn’t easy. Look at this .as(RestResponse.class) method call. What it does is decorate a Response with RestResponse, in order to make it method-richer. I just didn’t want to make Response contain 50+ methods, like many other libraries do. Here is what it does (this is pseudo-code):
class Response {
  RestResponse as() {
    return new RestResponse(this);
  }
  // Seven methods
}
class RestResponse implements Response {
  private final Response origin;
  // Original seven methods from Response
  // Additional 14 methods
}
As you see, instead of adding all possible methods to Response I placed them in supplementary decorators RestResponse, JsonResponse, XmlResponse, and others. It helps, but in order to write these decorators with the central object of type Response we have to use that “ugly” method as(), which depends heavily on Reflection and type casting.
In other words, fluent interfaces mean large classes or some ugly workarounds. I mentioned this problem earlier, when I wrote about Streams API and the interface Stream, which is perfectly fluent. There are 43 methods!
That is the biggest problem with fluent interfaces—they force objects to be huge.
Fluent interfaces are perfect for their users, since all methods are in one place and the number of classes is very small. It is easy to use them, especially with code auto-completion in most IDEs. They also make client code more readable, since “fluent” constructs look similar to plain English (like a DSL).
That is all true! However, the damage they cause to object design is the price, which is too high.
What is the alternative?
I would recommend you use decorators and smart objects instead. Here is how I would design jcabi-http, if I could do it now:
String html = new BodyOfResponse(
  new ResponseAssertStatus(
    new RequestWithMethod(
      new JdkRequest("https://www.google.com"),
      "GET"
    ),
    200
  )
).toString();
This is the same code as in the first snippet above, but it is much more object-oriented. The obvious problem with this code, of course, is that the IDE won’t be able to auto-complete almost anything. Also, we will have to remember many of the names of the classes. And the construct looks rather difficult to read for those who are used to fluent interfaces. In addition, it’s very far away from the DSL idea.
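To make the idea concrete, here is a toy sketch of such decorators, with stub bodies standing in for real HTTP work. The class names mimic the ones above, but this is not actual jcabi-http code:

```java
// A toy sketch of the decorator approach: each new feature is a new
// small class wrapping Request, not a new method on a growing one.
interface Request {
    String fetch();
}

final class JdkRequest implements Request {
    private final String url;

    JdkRequest(String url) {
        this.url = url;
    }

    @Override
    public String fetch() {
        // A real implementation would open an HTTP connection here
        return "response from " + this.url;
    }
}

final class RequestWithMethod implements Request {
    private final Request origin;
    private final String method;

    RequestWithMethod(Request origin, String method) {
        this.origin = origin;
        this.method = method;
    }

    @Override
    public String fetch() {
        // Decorate: adjust behavior, then delegate to the wrapped object
        return "[" + this.method + "] " + this.origin.fetch();
    }
}
```

Supporting a new HTTP feature, say a timeout, would mean writing one more decorator like RequestWithTimeout, leaving Request and JdkRequest untouched.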
But here is the list of benefits. First, each object is small, very cohesive, and loosely coupled with its neighbors, which are obvious merits in OOP. Second, adding new functionality to the library is as easy as creating a new class; no need to touch existing classes. Third, unit testing is simplified, since classes are small. Fourth, all classes can be immutable, which is also an obvious merit in OOP.
Thus, there seems to be a conflict between usefulness and maintainability. Fluent interfaces are good for users, but bad for library developers. Small objects are good for developers, but difficult to understand and use.
It seems to be so, but only if you are used to large classes and procedural programming. To me, a large number of small classes seems to be an advantage, not a drawback. Libraries that are clear, simple, and readable inside are much easier to use, even when I don’t know exactly which classes are the most suitable for me. Even without code auto-completion I can figure it out myself, because the code is clean.
Also, I very often find myself interested in extending existing functionality either inside my code base or via a pull request to the library. I am much more interested to do that if I know that the changes I introduce are isolated and easy to test.
Thus, no fluent interfaces anymore from me, only objects and decorators.

Say there is a back-end entry point, which is supposed to register a new book in the library, arriving in JSON:
{
  "title": "Object Thinking",
  "isbn": "0735619654",
  "author": "David West"
}
Also, there is an object of class Library, which expects an object of type Book to be given to its method register():
class Library {
  public void register(Book book) {
    // Create a new record in the database
  }
}
Say also, type Book has a simple method isbn():
interface Book {
  String isbn();
}
Now, here is the HTTP entry point (I’m using Takes and Cactoos), which is accepting a POST multipart/form-data request and registering the book in the library:
public class TkUpload implements Take {
private final Library library;
@Override
public Response act(Request req) {
String body = new RqPrint(
new RqMtSmart(new RqMtBase(req)).single("book")
).printBody();
JsonObject json = Json.createReader(
new InputStreamOf(body)
).readObject();
Book book = new BookDTO();
book.setIsbn(json.getString("isbn"));
library.register(book);
}
}What is wrong with this? Well, a few things.
First, it’s not reusable. If we were to need something similar in a different place, we would have to write this HTTP processing and JSON parsing again.
Second, error handling and validation are not reusable either. If we add it to the method above, we will have to copy it everywhere. Of course, the DTO may encapsulate it, but that’s not what DTOs are usually for.
Third, the code above is rather procedural and has a lot of temporal coupling.
A better design would be to hide this parsing inside a new class JsonBook:
class JsonBook implements Book {
private final String json;
JsonBook(String body) {
this.json = body;
}
@Override
public String isbn() {
return Json.createReader(
new InputStreamOf(body)
).readObject().getString("isbn");
}
}Then, the RESTful entry point will look like this:
public class TkUpload implements Take {
private final Library library;
@Override
public Response act(Request req) {
library.register(
new JsonBook(
new RqPrint(
new RqMtSmart(
new RqMtBase(req)
).single("book")
).printBody()
)
);
}
}Isn’t that more elegant?
Here are some examples from my projects: RqUser from zerocracy/farm and RqUser from yegor256/jare.
As you can see from the examples above, sometimes we can’t use implements because some primitives in Java are not interfaces but final classes: String is a “perfect” example. That’s why I have to do this:
class RqUser implements Scalar<String> {
@Override
public String value() {
// Parsing happens here and returns String
}
}But aside from that, these examples perfectly demonstrate the principle of “parsing objects” suggested above.
" /> data transfer objects, which are serialized into JSON before going out and deserialized when coming back. This way is as much popular as it is wrong. The serialization part should be replaced by printers, which I explained earlier. Here is my take on deserialization, which should be done by—guess what—objects.
Say there is a back-end entry point, which is supposed to register a new book in the library, arriving in JSON:
{
"title": "Object Thinking",
"isbn: "0735619654",
"author: "David West"
}Also, there is an object of class Library, which expects an object of type Book to be given to its method register():
class Library {
public void register(Book book) {
// Create a new record in the database
}
}Say also, type Book has a simple method isbn():
interface Book {
String isbn();
}Now, here is the HTTP entry point (I’m using Takes and Cactoos), which is accepting a POST multipart/form-data request and registering the book in the library:
public class TkUpload implements Take {
private final Library library;
@Override
public Response act(Request req) {
String body = new RqPrint(
new RqMtSmart(new RqMtBase(req)).single("book")
).printBody();
JsonObject json = Json.createReader(
new InputStreamOf(body)
).readObject();
Book book = new BookDTO();
book.setIsbn(json.getString("isbn"));
library.register(book);
}
}What is wrong with this? Well, a few things.
First, it’s not reusable. If we were to need something similar in a different place, we would have to write this HTTP processing and JSON parsing again.
Second, error handling and validation are not reusable either. If we add it to the method above, we will have to copy it everywhere. Of course, the DTO may encapsulate it, but that’s not what DTOs are usually for.
Third, the code above is rather procedural and has a lot of temporal coupling.
A better design would be to hide this parsing inside a new class JsonBook:
class JsonBook implements Book {
private final String json;
JsonBook(String body) {
this.json = body;
}
@Override
public String isbn() {
return Json.createReader(
new InputStreamOf(body)
).readObject().getString("isbn");
}
}Then, the RESTful entry point will look like this:
public class TkUpload implements Take {
private final Library library;
@Override
public Response act(Request req) {
library.register(
new JsonBook(
new RqPrint(
new RqMtSmart(
new RqMtBase(req)
).single("book")
).printBody()
)
);
}
}Isn’t that more elegant?
Here are some examples from my projects: RqUser from zerocracy/farm and RqUser from yegor256/jare.
As you can see from the examples above, sometimes we can’t use implements because some primitives in Java are not interfaces but final classes: String is a “perfect” example. That’s why I have to do this:
class RqUser implements Scalar<String> {
@Override
public String value() {
// Parsing happens here and returns String
}
}But aside from that, these examples perfectly demonstrate the principle of “parsing objects” suggested above.
"/>
https://www.yegor256.com/2018/02/27/parsing-objects.html
Don't Parse, Use Parsing Objects
- Moscow, Russia
- Yegor Bugayenko
The traditional way of integrating an object-oriented back-end with an external system is through data transfer objects, which are serialized into JSON before going out and deserialized when coming back. This approach is as popular as it is wrong. The serialization part should be replaced by printers, which I explained earlier. Here is my take on deserialization, which should be done by—guess what—objects.

Say there is a back-end entry point, which is supposed to register a new book in the library, arriving in JSON:
{
"title": "Object Thinking",
"isbn": "0735619654",
"author": "David West"
}
Also, there is an object of class Library, which expects an object of type Book to be given to its method register():
class Library {
  public void register(Book book) {
    // Create a new record in the database
  }
}
Say also, type Book has a simple method isbn():
interface Book {
  String isbn();
}
Now, here is the HTTP entry point (I’m using Takes and Cactoos), which accepts a POST multipart/form-data request and registers the book in the library:
public class TkUpload implements Take {
  private final Library library;
  TkUpload(Library library) {
    this.library = library;
  }
  @Override
  public Response act(Request req) throws IOException {
    String body = new RqPrint(
      new RqMtSmart(new RqMtBase(req)).single("book")
    ).printBody();
    JsonObject json = Json.createReader(
      new InputStreamOf(body)
    ).readObject();
    Book book = new BookDTO();
    book.setIsbn(json.getString("isbn"));
    this.library.register(book);
    return new RsWithStatus(200);
  }
}
What is wrong with this? Well, a few things.
First, it’s not reusable. If we were to need something similar in a different place, we would have to write this HTTP processing and JSON parsing again.
Second, error handling and validation are not reusable either. If we add it to the method above, we will have to copy it everywhere. Of course, the DTO may encapsulate it, but that’s not what DTOs are usually for.
Third, the code above is rather procedural and has a lot of temporal coupling.
A better design would be to hide this parsing inside a new class JsonBook:
class JsonBook implements Book {
  private final String json;
  JsonBook(String body) {
    this.json = body;
  }
  @Override
  public String isbn() {
    return Json.createReader(
      new InputStreamOf(this.json)
    ).readObject().getString("isbn");
  }
}
Then, the RESTful entry point will look like this:
public class TkUpload implements Take {
  private final Library library;
  TkUpload(Library library) {
    this.library = library;
  }
  @Override
  public Response act(Request req) throws IOException {
    this.library.register(
      new JsonBook(
        new RqPrint(
          new RqMtSmart(
            new RqMtBase(req)
          ).single("book")
        ).printBody()
      )
    );
    return new RsWithStatus(200);
  }
}
Isn’t that more elegant?
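Error handling and validation, criticized above as non-reusable, also become reusable once parsing lives in an object: they turn into decorators. A hedged sketch (ValidBook and the ISBN-10 check are my own, not from the article; Book is re-declared so the sketch is self-contained):

```java
// ValidBook is a hypothetical decorator: it validates the ISBN on its
// way out, so the check is written once and composed wherever needed,
// e.g. library.register(new ValidBook(new JsonBook(body))).
interface Book {
  String isbn();
}

final class ValidBook implements Book {
  private final Book origin;
  ValidBook(Book origin) {
    this.origin = origin;
  }
  @Override
  public String isbn() {
    final String isbn = this.origin.isbn();
    // ISBN-10 shape only, for brevity: nine digits plus a digit or 'X'.
    if (!isbn.matches("\\d{9}[\\dX]")) {
      throw new IllegalArgumentException("Invalid ISBN: " + isbn);
    }
    return isbn;
  }
}
```

The entry point stays oblivious: it composes the decorator and never repeats the validation logic.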
Here are some examples from my projects: RqUser from zerocracy/farm and RqUser from yegor256/jare.
As you can see from the examples above, sometimes we can’t use implements because some primitives in Java are not interfaces but final classes: String is a “perfect” example. That’s why I have to do this:
class RqUser implements Scalar<String> {
  @Override
  public String value() {
    // Parsing happens here and returns String
  }
}
But aside from that, these examples perfectly demonstrate the principle of “parsing objects” suggested above.
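The same principle works without any frameworks. A minimal, hedged sketch (Scalar loosely mirrors the Cactoos interface, simplified to drop the checked exception; ParamOf is my own hypothetical name): a parsing object that encapsulates raw text and extracts one value only when asked.

```java
// A framework-free "parsing object": it holds the raw query string
// and parses it lazily, when value() is called.
interface Scalar<T> {
  T value();
}

final class ParamOf implements Scalar<String> {
  private final String query;
  private final String name;
  ParamOf(String query, String name) {
    this.query = query;
    this.name = name;
  }
  @Override
  public String value() {
    // Walk "key=value" pairs and return the first match.
    for (String pair : this.query.split("&")) {
      String[] parts = pair.split("=", 2);
      if (parts.length == 2 && parts[0].equals(this.name)) {
        return parts[1];
      }
    }
    throw new IllegalArgumentException("No parameter: " + this.name);
  }
}
```

For instance, `new ParamOf("isbn=0735619654&title=OT", "isbn").value()` returns "0735619654"; the object is immutable and trivially unit-testable.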
https://www.yegor256.com/2018/01/02/operator-new-is-toxic.html
Operator new() is Toxic
- Moscow, Russia
- Yegor Bugayenko
To instantiate objects in most object-oriented languages, including Java, Ruby, and C++, we use the operator new(). Well, unless we use static factory methods, which we don’t use, because they are evil. Even though it looks so easy to make a new object any time we need it, I would recommend being more careful with this rather toxic operator.

I’m sure you understand that the problem with this operator is that it couples objects, making testing and reuse very difficult or even impossible. Let’s say there is a story in a file that we need to read as a UTF-8 text (I’m using TextOf from Cactoos):
class Story {
  String text() {
    return new TextOf(
      new File("/tmp/story.txt")
    ).asString();
  }
}
It seems super simple, but the problem is obvious: class Story can’t be reused. It can only read one particular file. Moreover, testing it will be rather difficult, since it reads the content from exactly one place, which can’t be changed at all. More formally this problem is known as an unbreakable dependency—we can’t break the link between Story and /tmp/story.txt—they are together forever.
To solve this we need to introduce a constructor and let Story accept the location of the content as an argument:
class Story {
  private final File file;
  Story(File f) {
    this.file = f;
  }
  String text() {
    return new TextOf(this.file).asString();
  }
}
Now, each user of the Story has to know the name of the file:
new Story(new File("/tmp/story.txt"));
It’s not really convenient, especially for those users who were using Story before, knowing nothing about the file path. To help them, we introduce a secondary constructor:
class Story {
  private final File file;
  Story() { // Here!
    this(new File("/tmp/story.txt"));
  }
  Story(File f) {
    this.file = f;
  }
  String text() {
    return new TextOf(this.file).asString();
  }
}
Now we just make an instance through a no-arguments constructor, just like we did before:
new Story();
I’m sure you’re well aware of this technique, which is also known as dependency injection. I’m actually not saying anything new. What I want you to pay attention to here is the location and the number of new operators in all three code snippets.
In the first snippet both new operators are in the method text(). In the second snippet we lost one of them. In the third snippet one operator is in the method, while the second one moved up, to the constructor.
Remember this fact and let’s move on.
What if the file is not in UTF-8 encoding but in KOI8-R? Class TextOf, and then method Story.text(), will throw an exception. However, class TextOf is capable of reading in any encoding; it just needs a second argument in its constructor:
new TextOf(this.file, Charset.forName("KOI8-R")).asString();
In order to make Story capable of using different encodings, we need to introduce a few additional secondary constructors and modify its primary constructor:
class Story {
  private final Text text;
  Story() {
    this(new File("/tmp/story.txt"));
  }
  Story(File f) {
    this(f, StandardCharsets.UTF_8);
  }
  Story(File f, Charset c) {
    this(new TextOf(f, c));
  }
  Story(Text t) {
    this.text = t;
  }
  String text() {
    return this.text.asString();
  }
}
It’s just dependency injection, but pay attention to the locations of the operator new. They are all in the constructors now, and none of them are left in the method text().
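This is also why the last version is easy to unit test. A hedged sketch (Text and Story are re-declared here, simplified, so the sketch is self-contained; FakeText is my own stub, not a Cactoos class): the test never touches the file system.

```java
// Simplified local re-declarations, for illustration only.
interface Text {
  String asString();
}

final class Story {
  private final Text text;
  Story(Text text) {
    this.text = text;
  }
  String text() {
    return this.text.asString();
  }
}

// A stub that replaces the real file-backed TextOf in a unit test.
final class FakeText implements Text {
  @Override
  public String asString() {
    return "once upon a time";
  }
}
```

In a test, `new Story(new FakeText()).text()` returns the stubbed content; because no new is left inside text(), nothing in the test ever opens /tmp/story.txt.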
The tendency here is obvious to me: the more new operators stay in the methods, the less reusable and testable the class becomes.
In other words, the operator new is a rather toxic thing, so try to keep its usage in your methods to a minimum. Make sure you instantiate everything, or almost everything, in your secondary constructors.
https://www.yegor256.com/2017/12/19/srp-is-hoax.html
SRP is a Hoax
- Moscow, Russia
- Yegor Bugayenko
The Single Responsibility Principle, according to Robert Martin’s Clean Code, means that “a class should have only one reason to change.” Let’s try to decrypt this rather vague statement and see how it helps us design better object-oriented software. If it does.

I mentioned SRP once in my post about SOLID, saying that it doesn’t really help programmers understand the good old “high cohesion” concept, which was introduced by Larry Constantine back in 1974. Now let’s see it by example and analyze how we can improve a class, with the SRP in mind, and whether it will become more object-oriented.
Let’s try the class AwsOcket from jcabi-s3 (I’ve simplified the code):
class AwsOcket {
boolean exists() { /* ... */ }
void read(final OutputStream output) { /* ... */ }
void write(final InputStream input) { /* ... */ }
}
Correct me if I’m wrong, but according to SRP this class is responsible for too many things: 1) checking the existence of an object in AWS S3, 2) reading its content, and 3) modifying its content. Right? It’s not a good design and it must be changed.
In order to change it and make it responsible for just one thing, we must introduce a getter that returns the AWS client, and then create three new classes: ExistenceChecker, ContentReader, and ContentWriter. They will check, read, and write, respectively. Now, in order to read the content and print it to the console, I’m currently doing this:
if (ocket.exists()) {
ocket.read(System.out);
}
Tomorrow, if I refactor the class, I will be doing this:
if (new ExistenceChecker(ocket.aws()).exists()) {
new ContentReader(ocket.aws()).read(System.out);
}
Aside from the fact that these checkers, readers, and writers are not really classes, but pure holders of procedures, the usage of this ocket turns into a nightmare. We can’t really know anymore what will happen with it when we pass it somewhere. We can’t, for example, guarantee that the content that is coming from it is decrypted or decoded on the fly. We simply can’t decorate it. It is not an object anymore, but a holder of an AWS client, which is used by some other classes somewhere.
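For concreteness, here is a minimal sketch of what such a split boils down to. This is my own hypothetical reconstruction, not jcabi-s3 code: FakeAws stands in for the real AWS client, and the signatures are simplified (read() returns a String instead of writing to an OutputStream):

```java
// Hypothetical reconstruction of the SRP-driven split, not jcabi-s3 code.
// "FakeAws" stands in for the real AWS client; signatures are simplified.
class FakeAws {
    boolean has() { return true; }       // pretend S3 "object exists" call
    String content() { return "hello"; } // pretend S3 "get object" call
}

// Each "responsible for one thing" class is just a procedure holder
// around the same client.
class ExistenceChecker {
    private final FakeAws aws;
    ExistenceChecker(final FakeAws aws) { this.aws = aws; }
    boolean exists() { return this.aws.has(); }
}

class ContentReader {
    private final FakeAws aws;
    ContentReader(final FakeAws aws) { this.aws = aws; }
    String read() { return this.aws.content(); }
}

public class Main {
    public static void main(String[] args) {
        final FakeAws aws = new FakeAws(); // what ocket.aws() would expose
        if (new ExistenceChecker(aws).exists()) {
            System.out.println(new ContentReader(aws).read());
        }
    }
}
```

Notice how the client leaks out of the ocket and every caller now wires the pieces together by hand.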
Yes, now it is responsible for only one thing: encapsulating the reference to the AWS client. It is a perfect class as far as SRP is concerned. But it is not an object anymore.
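For contrast, decorating the original single-class design is straightforward. Here is a minimal hypothetical sketch: the Ocket interface below is my simplified stand-in (read() returns a String instead of writing to an OutputStream), and the Base64 decoding merely illustrates “decoding on the fly”:

```java
import java.util.Base64;

// Hypothetical sketch of decoration; "Ocket" is a simplified stand-in
// interface, and Base64 decoding merely illustrates "decoding on the fly".
interface Ocket {
    boolean exists();
    String read();
}

class PlainOcket implements Ocket {
    @Override public boolean exists() { return true; }
    @Override public String read() { return "aGVsbG8="; } // "hello", Base64-encoded
}

// The decorator decodes content on the fly; callers still see an Ocket.
class DecodedOcket implements Ocket {
    private final Ocket origin;
    DecodedOcket(final Ocket origin) { this.origin = origin; }
    @Override public boolean exists() { return this.origin.exists(); }
    @Override public String read() {
        return new String(Base64.getDecoder().decode(this.origin.read()));
    }
}

public class Main {
    public static void main(String[] args) {
        final Ocket ocket = new DecodedOcket(new PlainOcket());
        System.out.println(ocket.read()); // prints "hello"
    }
}
```

Once the behavior is scattered across checkers and readers, there is no single seam left to wrap like this.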
The same will happen with any class if you apply SRP to its full extent: it will become a holder of data or of other objects, with a collection of setters and getters on top. Maybe with one extra method in addition to those.
My point is that SRP is a wrong idea.
Making classes small and cohesive is a good idea, but making them responsible “for one thing” is a misleading simplification of a “high cohesion” concept. It only turns them into dumb carriers of something else, instead of being encapsulators and decorators of smaller entities, to construct bigger ones.
In our fight for this fake SRP idea we lose a much more important principle, which really is about true object-oriented programming and thinking: encapsulation. It is much less important how many things an object is responsible for than how tightly it protects the entities it encapsulates. A monster object with a hundred methods is much less of a problem than a DTO with five pairs of getters and setters! This is because a DTO spreads the problem all over the code, where we can’t even find it, while the monster object is always right in front of us and we can always refactor it into smaller pieces.
Encapsulation comes first, size goes next, if ever.
What do you think about the Single Responsibility Principle (SRP)? #oop #elegantobjects
— Yegor Bugayenko (@yegor256) September 2, 2018
https://www.yegor256.com/2017/12/05/data-access-object.html
DAO is Yet Another OOP Shame
- Odessa, Ukraine
- Yegor Bugayenko
Someone asked me what I think about DAO and I realized that, even though I wrote about ORM, DTO, and getters, I haven’t had a chance yet to mention DAO. Here is my take on it: it’s as much of a shame as its friends—ORM, DTO, and getters. In a nutshell, a Data Access Object is an object that “provides an abstract interface to some type of database or other persistence mechanism.” The purpose is noble, but the implementation is terrible.

Here is how it may look:
interface BookDAO {
Book find(int id);
void update(Book book);
// Other methods here ...
}
The idea is simple—method find() creates a DTO Book, someone else injects new data into it and calls update():
BookDAO dao = BookDAOFactory.getBookDAO();
Book book = dao.find(123);
book.setTitle("Don Quixote");
dao.update(book);
What is wrong, you ask? Everything that was wrong with ORM, but instead of a “session” we have this DAO. The problem remains the same: the book is not an object, but a data container. I quote my own three-year-old statement from the ORM article, with a slight change in the name: “DAO, instead of encapsulating database interaction inside an object, extracts it away, literally tearing a solid and cohesive living organism apart.” For more details, please check that article.
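By contrast, here is a minimal sketch of the object alternative: the book talks to its own storage, and there is no DAO and no setters. This is my own hypothetical illustration; Storage is an in-memory stand-in for a real database connection:

```java
import java.util.HashMap;
import java.util.Map;

// In-memory stand-in for a real database; in production this would be
// a JDBC DataSource or similar.
class Storage {
    final Map<Integer, String> titles = new HashMap<>();
}

// The book performs its own persistence: no DAO, no setters.
class Book {
    private final Storage storage;
    private final int id;
    Book(final Storage storage, final int id) {
        this.storage = storage;
        this.id = id;
    }
    void rename(final String title) {
        this.storage.titles.put(this.id, title); // would be an SQL UPDATE
    }
    String title() {
        return this.storage.titles.get(this.id); // would be an SQL SELECT
    }
}

public class Main {
    public static void main(String[] args) {
        final Book book = new Book(new Storage(), 123);
        book.rename("Don Quixote");
        System.out.println(book.title()); // prints "Don Quixote"
    }
}
```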
However, I have to say that I have something like DAOs in most of my pet projects, but they don’t return or accept DTOs. Instead, they return objects and sometimes accept operations on them. Here are a few examples. Look at this Pipes interface from Wring.io:
interface Pipes {
void add(String json);
Pipe pipe(long number);
}
Its method add() creates a new item in the “collection” and method pipe() returns a single object from the collection. The Pipe is not a DTO; it is a normal object that is fully capable of doing all necessary database operations, without any help from a DAO. For example, there is a Pipe.status(String) method to update its status. I don’t ask Pipes to do that; I just call pipe.status("Hello, world!").
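A toy version of this pattern might look as follows. This is my own sketch, not the Wring.io code: the real classes work against a database (and pipe() looks items up by id), while here a List stands in for the storage:

```java
import java.util.ArrayList;
import java.util.List;

// Toy sketch of the Pipes/Pipe pattern; the real Wring.io classes talk
// to a database, and pipe() looks items up by id. A List stands in here.
class Pipes {
    private final List<String> rows = new ArrayList<>();
    void add(final String json) {
        this.rows.add(json);
    }
    Pipe pipe(final long number) {
        return new Pipe(this.rows, (int) number);
    }
}

// A live object: it updates its own row, no DAO involved.
class Pipe {
    private final List<String> rows;
    private final int index;
    Pipe(final List<String> rows, final int index) {
        this.rows = rows;
        this.index = index;
    }
    void status(final String text) {
        this.rows.set(this.index, text); // would be an SQL UPDATE
    }
    String json() {
        return this.rows.get(this.index); // would be an SQL SELECT
    }
}

public class Main {
    public static void main(String[] args) {
        final Pipes pipes = new Pipes();
        pipes.add("{\"status\": \"new\"}");
        final Pipe pipe = pipes.pipe(0);
        pipe.status("{\"status\": \"Hello, world!\"}");
        System.out.println(pipe.json()); // prints the updated row
    }
}
```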
Here is yet another example from Jare.io: interface Base which returns a list of objects of type Domain. Then, when we want to delete a domain, we just call domain.delete(). The domain is fully capable of doing all necessary database manipulations.
The problem with DAO is right in its name, I believe. It says that we are accessing “data,” and that is exactly what it does: it goes to the database, retrieves some data, and returns data. Not an object, but data, also known as a “data transfer object.” As we discussed before, direct data manipulations are what break encapsulation and make object-oriented code procedural and unmaintainable.

https://www.yegor256.com/2017/11/14/static-factory-methods.html
Constructors or Static Factory Methods?
- Odessa, Ukraine
- Yegor Bugayenko
I believe Joshua Bloch said it first in his very good book “Effective Java”: static factory methods are the preferred way to instantiate objects compared with constructors. I disagree. Not only because I believe that static methods are pure evil, but mostly because in this particular case they pretend to be good and make us think that we have to love them.

Let’s analyze the reasoning and see why it’s wrong, from an object-oriented point of view.
This is a class with one primary and two secondary constructors:
class Color {
private final int hex;
Color(String rgb) {
this(Integer.parseInt(rgb, 16));
}
Color(int red, int green, int blue) {
this((red << 16) + (green << 8) + blue);
}
Color(int h) {
this.hex = h;
}
}This is a similar class with three static factory methods:
class Color {
private final int hex;
static Color makeFromRGB(String rgb) {
return new Color(Integer.parseInt(rgb, 16));
}
static Color makeFromPalette(int red, int green, int blue) {
return new Color((red << 16) + (green << 8) + blue);
}
static Color makeFromHex(int h) {
return new Color(h);
}
private Color(int h) {
this.hex = h;
}
}Which one do you like better?
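A side note on both versions above: in Java, + binds tighter than <<, so the packing expression red << 16 + green << 8 + blue parses as (red << (16 + green)) << (8 + blue) and needs explicit parentheses to build the intended 0xRRGGBB value. A quick sketch to verify:

```java
public class ShiftDemo {
    public static void main(String[] args) {
        int red = 255, green = 99, blue = 71;
        // Without parentheses, + is evaluated before <<:
        // this is (red << (16 + green)) << (8 + blue), not a packed RGB value.
        int wrong = red << 16 + green << 8 + blue;
        // With parentheses we get the intended 0xRRGGBB packing.
        int right = (red << 16) + (green << 8) + blue;
        System.out.println(Integer.toHexString(right)); // ff6347 (tomato)
        System.out.println(wrong == right);             // false
    }
}
```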
According to Joshua Bloch, there are three basic advantages to using static factory methods instead of constructors (there are actually four, but the fourth one is not applicable to Java anymore):
- They have names.
- They can cache.
- They can subtype.
I believe that all three make perfect sense … if the design is wrong. They are good excuses for workarounds. Let’s take them one by one.
They Have Names
This is how you make a red tomato color object with a constructor:
Color tomato = new Color(255, 99, 71);This is how you do it with a static factory method:
Color tomato = Color.makeFromPalette(255, 99, 71);It seems that makeFromPalette() is semantically richer than just new Color(), right? Well, yes. Who knows what those three numbers mean if we just pass them to the constructor. But the word “palette” helps us figure everything out immediately.
True.
However, the right solution would be to use polymorphism and encapsulation, to decompose the problem into a few semantically rich classes:
interface Color {
}
class HexColor implements Color {
private final int hex;
HexColor(int h) {
this.hex = h;
}
}
class RGBColor implements Color {
private final Color origin;
RGBColor(int red, int green, int blue) {
this.origin = new HexColor(
(red << 16) + (green << 8) + blue
);
}
}Now, we use the right constructor of the right class:
Color tomato = new RGBColor(255, 99, 71);See, Joshua?
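For completeness, here is a compilable sketch of those two classes. The hex() accessor is my addition (the article's snippet has none), so that the encapsulated value can be observed:

```java
interface Color {
    int hex(); // accessor added for this sketch only
}

final class HexColor implements Color {
    private final int value;
    HexColor(int h) { this.value = h; }
    @Override public int hex() { return this.value; }
}

final class RGBColor implements Color {
    private final Color origin;
    RGBColor(int red, int green, int blue) {
        // Pack the three channels into one hex value and delegate.
        this.origin = new HexColor((red << 16) + (green << 8) + blue);
    }
    @Override public int hex() { return this.origin.hex(); }
}

public class NamesDemo {
    public static void main(String[] args) {
        Color tomato = new RGBColor(255, 99, 71);
        System.out.println(Integer.toHexString(tomato.hex())); // ff6347
    }
}
```

RGBColor stays semantically rich at the call site while simply delegating to the HexColor it wraps.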
They Can Cache
Let’s say I need a red tomato color in multiple places in the application:
Color tomato = new Color(255, 99, 71);
// ... sometime later
Color red = new Color(255, 99, 71);Two objects will be created, which is obviously inefficient, since they are identical. It would be better to keep the first instance somewhere in memory and return it when the second call arrives. Static factory methods make it possible to solve this very problem:
Color tomato = Color.makeFromPalette(255, 99, 71);
// ... sometime later
Color red = Color.makeFromPalette(255, 99, 71);Then somewhere inside the Color we keep a private static Map with all the objects already instantiated:
class Color {
private static final Map<Integer, Color> CACHE =
new HashMap<>();
private final int hex;
static Color makeFromPalette(int red, int green, int blue) {
final int hex = (red << 16) + (green << 8) + blue;
return Color.CACHE.computeIfAbsent(
hex, h -> new Color(h)
);
}
private Color(int h) {
this.hex = h;
}
}It is very effective performance-wise. With a small object like our Color the problem may not be so obvious, but when objects are bigger, their instantiation and garbage collection may waste a lot of time.
True.
However, there is an object-oriented way to solve this problem. We just introduce a new class Palette, which becomes a store of colors:
class Palette {
private final Map<Integer, Color> colors =
new HashMap<>();
Color take(int red, int green, int blue) {
final int hex = (red << 16) + (green << 8) + blue;
return this.colors.computeIfAbsent(
hex, h -> new Color(h)
);
}
}Now, we make an instance of Palette once and ask it to return a color to us every time we need it:
Color tomato = palette.take(255, 99, 71);
// Later we will get the same instance:
Color red = palette.take(255, 99, 71);See, Joshua, no static methods, no static attributes.
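As a compilable sketch of this idea (with ConcurrentHashMap swapped in, on the assumption that a shared color store should be thread-safe; the article doesn't discuss concurrency):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

final class Color {
    final int hex;
    Color(int h) { this.hex = h; }
}

final class Palette {
    // The cache lives in an instance field: no static state involved.
    private final ConcurrentMap<Integer, Color> colors = new ConcurrentHashMap<>();
    Color take(int red, int green, int blue) {
        final int hex = (red << 16) + (green << 8) + blue;
        return this.colors.computeIfAbsent(hex, Color::new);
    }
}

public class PaletteDemo {
    public static void main(String[] args) {
        Palette palette = new Palette();
        Color tomato = palette.take(255, 99, 71);
        Color red = palette.take(255, 99, 71);
        System.out.println(tomato == red);                   // true: same cached instance
        System.out.println(Integer.toHexString(tomato.hex)); // ff6347
    }
}
```

The caching behavior is the same as with the static factory method, but the cache now belongs to an object you create and pass around, and you can have several independent palettes.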
They Can Subtype
Let’s say our class Color has a method lighter(), which is supposed to shift the color to the next available lighter one:
class Color {
protected final int hex;
Color(int h) {
this.hex = h;
}
public Color lighter() {
return new Color(hex + 0x111);
}
}However, sometimes it’s more desirable to pick the next lighter color through a set of available Pantone colors:
class PantoneColor extends Color {
private final PantoneName pantone;
PantoneColor(String name) {
this(new PantoneName(name));
}
PantoneColor(PantoneName name) {
// Color has no default constructor, so we must call super();
// assume, hypothetically, that PantoneName can report its hex code:
super(name.hex());
this.pantone = name;
}
@Override
public Color lighter() {
return new PantoneColor(this.pantone.up());
}
}Then, we create a static factory method, which will decide which Color implementation is the most suitable for us:
class Color {
private final String code;
static Color make(int h) {
if (h == 0xBF1932) {
return new PantoneColor("19-1664 TPX");
}
return new RGBColor(h);
}
}If the true red color is requested, we return an instance of PantoneColor. In all other cases it’s just a standard RGBColor. The decision is made by the static factory method. This is how we will call it:
Color color = Color.make(0xBF1932);It would not be possible to do the same “forking” with a constructor, since it can only return the class it is declared in. A static method has all the necessary freedom to return any subtype of Color.
True.
However, in an object-oriented world we can and must do it all differently. First, we would make Color an interface:
interface Color {
Color lighter();
int hex();
}Next, we would move this decision-making process to its own class Colors, just like we did in the previous example:
class Colors {
Color make(int h) {
if (h == 0xBF1932) {
return new PantoneColor("19-1664 TPX");
}
return new RGBColor(h);
}
}And we would use an instance of class Colors instead of a static factory method inside Color:
colors.make(0xBF1932);However, this is still not really an object-oriented way of thinking, because we’re taking the decision-making away from the object it belongs to. Either through a static factory method make() or a new class Colors—it doesn’t really matter how—we tear our objects into two pieces. The first piece is the object itself and the second one is the decision making algorithm that stays somewhere else.
A much more object-oriented design would be to put the logic into an object of class PantoneColor which would decorate the original RGBColor:
class PantoneColor implements Color {
private final Color origin;
PantoneColor(Color color) {
this.origin = color;
}
@Override
public int hex() {
return this.origin.hex();
}
@Override
public Color lighter() {
final Color next;
if (this.origin.hex() == 0xBF1932) {
next = new RGBColor(0xD12631);
} else {
next = this.origin.lighter();
}
return new PantoneColor(next);
}
}Then, we make an instance of RGBColor and decorate it with PantoneColor:
Color red = new PantoneColor(
new RGBColor(0xBF1932)
);We ask red to return a lighter color and it returns the one from the Pantone palette, not the one that is merely lighter in RGB coordinates:
Color lighter = red.lighter(); // 0xD12631Of course, this example is rather primitive and needs further improvement if we really want it to be applicable to all Pantone colors, but I hope you get the idea. The logic must stay inside the class, not somewhere outside, not in static factory methods or even in some other supplementary class. I’m talking about the logic that belongs to this particular class, of course. If it’s something related to the management of class instances, then there can be containers and stores, just like in the previous example above.
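Putting the pieces together, here is a self-contained sketch of the decorator. Note that Color must declare hex() for PantoneColor to inspect the value it decorates, a detail the snippets above leave implicit:

```java
interface Color {
    int hex();
    Color lighter();
}

final class RGBColor implements Color {
    private final int value;
    RGBColor(int h) { this.value = h; }
    @Override public int hex() { return this.value; }
    @Override public Color lighter() {
        // Naive RGB lightening, as in the article's example.
        return new RGBColor(this.value + 0x111);
    }
}

final class PantoneColor implements Color {
    private final Color origin;
    PantoneColor(Color color) { this.origin = color; }
    @Override public int hex() { return this.origin.hex(); }
    @Override public Color lighter() {
        final Color next;
        if (this.origin.hex() == 0xBF1932) {
            next = new RGBColor(0xD12631); // the next lighter Pantone red
        } else {
            next = this.origin.lighter();
        }
        return new PantoneColor(next);
    }
}

public class DecoratorDemo {
    public static void main(String[] args) {
        Color red = new PantoneColor(new RGBColor(0xBF1932));
        Color lighter = red.lighter();
        System.out.println(Integer.toHexString(lighter.hex())); // d12631
    }
}
```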
To summarize, I would strongly recommend you never use static methods, especially when they are going to replace object constructors. Giving birth to an object through its constructor is the most “sacred” moment in any object-oriented software; don’t miss the beauty of it.
https://www.yegor256.com/2017/09/12/evil-object-name-suffix-client.html
Yet Another Evil Suffix For Object Names: Client
- Odessa, Ukraine
- Yegor Bugayenko
Some time ago we were talking about “-ER” suffixes in object and class names. We agreed that they were evil and must be avoided if we want our code to be truly object-oriented and our objects to be objects instead of collections of procedures. Now I’m ready to introduce a new evil suffix: Client.

Let me give an example first. This is what an object with such a suffix may look like (it’s a pseudo-code version of the AmazonS3Client from AWS Java SDK):
class AmazonS3Client {
createBucket(String name);
deleteBucket(String name);
doesBucketExist(String name);
getBucketAcl(String name);
getBucketPolicy(String name);
listBuckets();
// 160+ more methods here
}All “clients” look similar: they encapsulate the destination URL with some access credentials and expose a number of methods, which transport the data to/from the “server.” Even though this design looks like a proper object, it doesn’t really follow the true spirit of object-orientation. That’s why it’s not as maintainable as it should be, for two reasons:
1. Its scope is too broad. Since the client is an abstraction of a server, it inevitably has to represent the server’s entire functionality. When the functionality is rather limited, there is no issue; take HttpClient from Apache HttpComponents as an example. However, when the server is more complex, the size of the client also grows. There are over 160 (!) methods in AmazonS3Client at the time of writing, while it started with only a few dozen just a few hundred versions ago.
2. It is data focused. The very idea of a client-server relationship is about transferring data. Take the HTTP RESTful API of the AWS S3 service as an example. There are entities on the AWS side: buckets, objects, versions, access control policies, etc., and the server turns them into JSON/XML data. Then the data comes to us, and the client on our side deals with JSON or XML. It inevitably remains data for us and never really becomes buckets, objects, or versions.
The consequences depend on the situation, but these are the most probable:
Procedural code. Since the client returns data, the code that works with that data will most likely be procedural. Look at the results AWS SDK methods return; they all look like objects, but in reality they are just data structures: S3Object, ObjectMetadata, BucketPolicy, PutObjectResult, etc. They are all Data Transfer Objects with only getters and setters inside.
Duplicated code. If we actually decide to stay object-oriented, we will have to turn the data the client returns to us into objects. Most likely this will lead to code duplication in multiple projects. I had that too, when I started to work with the S3 SDK. Very soon I realized that in order to avoid duplication I’d better create a library that does the job of converting S3 SDK data into objects: jcabi-s3.
Difficulties with testing. Since the client is in most cases a rather big class/interface, mocking it in unit tests or creating its test doubles/fakes is a rather complex task.
Static problems. Client classes, even though their methods are not static, look very similar to utility classes, which are well known for being anti-OOP. The issues we have with utility classes are almost the same as those we have with “client” classes.
Extendability issues. Needless to say, it’s almost impossible to decorate a client object when it has 160+ methods and keeps on growing. The only possible way to add new functionality to it is by creating new methods. Eventually we get a monster class that can’t be reused without modification.
What is the alternative?
The right design would be to replace “clients” with client-side objects that represent entities of the server side, not the entire server. For example, with the S3 SDK, that could be Bucket, Object, Version, Policy, etc. Each of them exposes the functionality of the real buckets, objects, and versions that AWS S3 provides.
Of course, we will need a high-level object that somehow represents the entire API/server, but it should be small. In the S3 SDK example it could be called Region, which represents an entire AWS region with its buckets. We retrieve a bucket from it and don’t need the region anymore. Then, to list the objects in the bucket, we ask the bucket itself to do it for us. There is no need to communicate with the entire “server object” every time, even though technically such communication happens, of course.
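A minimal sketch of what such entity-centric interfaces could look like. The names Region and Bucket echo jcabi-s3, but the signatures below are illustrative assumptions, not the real API; the in-memory fake shows how easy such small interfaces are to substitute in tests, compared with a 160-method client:

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical client-side entities; names and signatures are illustrative.
interface Region {
    Bucket bucket(String name);
}

interface Bucket {
    String name();
    List<String> objects();
}

// A tiny in-memory fake: with interfaces this narrow, a test double is trivial.
final class FakeRegion implements Region {
    @Override public Bucket bucket(String name) {
        return new FakeBucket(name, Arrays.asList("a.txt", "b.txt"));
    }
}

final class FakeBucket implements Bucket {
    private final String label;
    private final List<String> keys;
    FakeBucket(String label, List<String> keys) {
        this.label = label;
        this.keys = keys;
    }
    @Override public String name() { return this.label; }
    @Override public List<String> objects() { return this.keys; }
}

public class RegionDemo {
    public static void main(String[] args) {
        Region region = new FakeRegion();
        Bucket bucket = region.bucket("photos");
        System.out.println(bucket.name());           // photos
        System.out.println(bucket.objects().size()); // 2
    }
}
```

Client code talks only to Bucket once it has one; a real implementation backed by HTTP calls could replace FakeRegion without the caller noticing.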
To summarize, the trouble is not exactly in the name suffix, but in the very idea of representing the entire server on the client side rather than its entities. Such an abstraction is 1) too big and 2) very data driven.
BTW, check out some of the JCabi libraries (Java) for examples of object-oriented clients without “client” objects: jcabi-github, jcabi-dynamo, jcabi-s3, or jcabi-simpledb.
Some time ago we were talking about “-ER” suffixes in object and class names. We agreed that they were evil and must be avoided if we want our code to be truly object-oriented and our objects to be objects instead of collections of procedures. Now I’m ready to introduce a new evil suffix: Client.

Let me give an example first. This is what an object with such a suffix may look like (it’s a pseudo-code version of the AmazonS3Client from AWS Java SDK):
class AmazonS3Client {
createBucket(String name);
deleteBucket(String name);
doesBucketExist(String name);
getBucketAcl(String name);
getBucketPolicy(String name);
listBuckets();
// 160+ more methods here
}
All “clients” look similar: they encapsulate the destination URL with some access credentials and expose a number of methods, which transport data to/from the “server.” Even though this design looks like a proper object, it doesn’t really follow the true spirit of object-orientation. That’s why it’s not as maintainable as it should be, for two reasons:
Its scope is too broad. Since the client is an abstraction of a server, it inevitably has to represent the server’s entire functionality. When that functionality is rather limited, there is no issue; take HttpClient from Apache HttpComponents as an example. However, when the server is more complex, the size of the client also grows. There are over 160 (!) methods in AmazonS3Client at the time of writing, while it started with only a few dozen just a few hundred versions ago.
It is data focused. The very idea of a client-server relationship is about transferring data. Take the HTTP RESTful API of the AWS S3 service as an example. There are entities on the AWS side: buckets, objects, versions, access control policies, etc., and the server turns them into JSON/XML data. Then the data comes to us, and the client on our side deals with that JSON or XML. It inevitably remains data for us and never really becomes buckets, objects, or versions.
The consequences depend on the situation, but these are the most probable:
Procedural code. Since the client returns data, the code that works with that data will most likely be procedural. Look at the results AWS SDK methods return: they all look like objects, but in reality they are just data structures: S3Object, ObjectMetadata, BucketPolicy, PutObjectResult, etc. They are all Data Transfer Objects with only getters and setters inside.
Duplicated code. If we actually decide to stay object-oriented, we will have to turn the data the client returns to us into objects. Most likely this will lead to code duplication across multiple projects. I ran into that too when I started to work with the S3 SDK. Very soon I realized that, in order to avoid duplication, I’d better create a library that does the job of converting S3 SDK data into objects: jcabi-s3.
Difficulties with testing. Since the client is in most cases a rather big class or interface, mocking it in unit tests or creating test doubles/fakes for it is a complex task.
Static problems. Client classes, even though their methods are not static, look very similar to utility classes, which are well known for being anti-OOP. The issues we have with utility classes are almost the same as those we have with “client” classes.
Extendability issues. Needless to say, it’s almost impossible to decorate a client object when it has 160+ methods and keeps on growing. The only possible way to add new functionality to it is to create new methods. Eventually we get a monster class that can’t be reused without modification.
What is the alternative?
The right design would be to replace “clients” with client-side objects that represent entities of the server side, not the entire server. For the S3 SDK, those could be Bucket, Object, Version, Policy, etc. Each of them would expose the functionality of the real buckets, objects, and versions that AWS S3 exposes.
Of course, we will need a high-level object that somehow represents the entire API/server, but it should be small. In the S3 SDK example it could be called Region, representing an entire AWS region with its buckets. We retrieve a bucket from it and don’t need the region anymore. Then, to list objects in the bucket, we ask the bucket to do it for us. There is no need to communicate with the entire “server object” every time, even though technically such communication happens, of course.
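The idea can be sketched in a few lines of Java. The interfaces Region and Bucket below and the Fake* implementations are illustrative assumptions of mine, not the actual jcabi-s3 API:

```java
import java.util.Arrays;
import java.util.List;

// A small entry point representing the whole API surface (hypothetical).
interface Region {
    Bucket bucket(String name);
}

// A client-side object representing one server-side entity (hypothetical).
interface Bucket {
    List<String> list(String prefix);
}

// In-memory fakes: trivial to write precisely because the interfaces
// are small, unlike a 160-method client.
final class FakeRegion implements Region {
    @Override
    public Bucket bucket(String name) {
        return new FakeBucket(name);
    }
}

final class FakeBucket implements Bucket {
    private final String name;
    FakeBucket(String name) {
        this.name = name;
    }
    @Override
    public List<String> list(String prefix) {
        // A real implementation would talk to the S3 HTTP API here.
        return Arrays.asList(prefix + "/a.txt", prefix + "/b.txt");
    }
}
```

Once we have a Bucket, we never talk to the Region again, and decorating or faking a one-method interface is a one-class job.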
To summarize, the trouble is not exactly in the name suffix, but in the very idea of representing the entire server on the client side rather than its entities. Such an abstraction is 1) too big and 2) very data driven.
BTW, check out some of the JCabi libraries (Java) for examples of object-oriented clients without “client” objects: jcabi-github, jcabi-dynamo, jcabi-s3, or jcabi-simpledb.

https://www.yegor256.com/2017/08/08/raii-in-java.html
RAII in Java
- Riga, Latvia
- Yegor Bugayenko
Resource Acquisition Is Initialization (RAII) is a design idea introduced in C++ by Bjarne Stroustrup for exception-safe resource management. Thanks to garbage collection, Java doesn’t have this feature, but we can implement something similar using try-with-resources.

The problem RAII is solving is obvious; have a look at this code (I’m sure you know what Semaphore is and how it works in Java):
class Foo {
private Semaphore sem = new Semaphore(5);
void print(int x) throws Exception {
this.sem.acquire();
if (x > 1000) {
throw new Exception("Too large!");
}
System.out.printf("x = %d", x);
this.sem.release();
}
}
The code is rather primitive and doesn’t do anything useful, but you most probably get the idea: the method print(), if called from multiple parallel threads, will allow only five of them to print in parallel. It will also refuse to print and throw an exception whenever x is bigger than 1000.
The problem with this code is resource leakage. Each print() call with x larger than 1000 takes one permit from the semaphore and never returns it. After five such calls the semaphore will be empty, and no other thread will be able to print anything.
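The leak is easy to check with a minimal self-contained sketch (the class name LeakDemo and the helper method are mine; the logic mirrors print() above):

```java
import java.util.concurrent.Semaphore;

final class LeakDemo {
    // Mirrors the faulty print(): acquire a permit, throw, never release.
    static int remainingAfterFailures() {
        final Semaphore sem = new Semaphore(5);
        for (int i = 0; i < 5; ++i) {
            try {
                sem.acquire();
                throw new Exception("Too large!");
            } catch (final Exception ex) {
                // the permit acquired above is never returned
            }
        }
        return sem.availablePermits();
    }

    public static void main(final String[] args) {
        // After five failing calls, all five permits have leaked:
        System.out.println(LeakDemo.remainingAfterFailures()); // prints 0
    }
}
```

With zero permits left, the sixth caller would block forever in sem.acquire().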
What is the solution? Here it is:
class Foo {
private Semaphore sem = new Semaphore(5);
void print(int x) throws Exception {
this.sem.acquire();
if (x > 1000) {
this.sem.release();
throw new Exception("Too large!");
}
System.out.printf("x = %d", x);
this.sem.release();
}
}
We must release the permit before we throw the exception.
However, there is another problem that shows up: code duplication. We release the permit in two places. If we add more throw instructions we will also have to add more sem.release() calls.
A very elegant solution was introduced in C++ and is called RAII. This is how it would look in Java:
class Permit {
private Semaphore sem;
Permit(Semaphore s) throws InterruptedException {
this.sem = s;
this.sem.acquire();
}
@Override
public void finalize() {
this.sem.release();
}
}
class Foo {
private Semaphore sem = new Semaphore(5);
void print(int x) throws Exception {
new Permit(this.sem);
if (x > 1000) {
throw new Exception("Too large!");
}
System.out.printf("x = %d", x);
}
}
See how beautiful the code inside the method Foo.print() is. We just create an instance of the class Permit, and it immediately acquires a permit from the semaphore. Then we exit the method print(), either via an exception or in the normal way, and the method Permit.finalize() releases the permit.
Elegant, isn’t it? Yes, it is, but it won’t work in Java.
It won’t work because, unlike C++, Java doesn’t destroy objects when they go out of scope. The object of class Permit won’t be destroyed when we exit the method print(). It will be destroyed eventually, but we don’t know when exactly. Most likely it will be destroyed long after all five permits have been acquired and every thread is blocked.
There is a solution in Java too. It is not as elegant as the one from C++, but it does work. Here it is:
class Permit implements Closeable {
private Semaphore sem;
Permit(Semaphore s) {
this.sem = s;
}
@Override
public void close() {
this.sem.release();
}
public Permit acquire() throws InterruptedException {
this.sem.acquire();
return this;
}
}
class Foo {
private Semaphore sem = new Semaphore(5);
void print(int x) throws Exception {
try (Permit p = new Permit(this.sem).acquire()) {
if (x > 1000) {
throw new Exception("Too large!");
}
System.out.printf("x = %d", x);
}
}
}
Pay attention to the try block and to the Closeable interface that the class Permit now implements. The object p will be “closed” when the try block exits. It may exit either normally at the end, or via a return or throw statement. In either case Permit.close() will be called: this is how try-with-resources works in Java.
I introduced method acquire() and moved sem.acquire() out of the Permit constructor because I believe that constructors must be code-free.
To summarize, RAII is a perfect technique to use when you deal with resources that may leak. Even though Java doesn’t support it out of the box, we can emulate it with try-with-resources and Closeable.
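The exception-safety of the try-with-resources version can be verified with a small self-contained sketch (the class name RaiiDemo and its helper method are mine; acquire() declares InterruptedException so the code compiles):

```java
import java.io.Closeable;
import java.util.concurrent.Semaphore;

final class Permit implements Closeable {
    private final Semaphore sem;
    Permit(final Semaphore s) {
        this.sem = s;
    }
    Permit acquire() throws InterruptedException {
        this.sem.acquire();
        return this;
    }
    @Override
    public void close() {
        this.sem.release();
    }
}

final class RaiiDemo {
    // Fail many times in a row; with the leaky version five failures
    // would drain the semaphore, but here every permit comes back.
    static int remainingAfterFailures() {
        final Semaphore sem = new Semaphore(5);
        for (int i = 0; i < 10; ++i) {
            try (Permit p = new Permit(sem).acquire()) {
                throw new Exception("Too large!");
            } catch (final Exception ex) {
                // close() has already released the permit
            }
        }
        return sem.availablePermits();
    }

    public static void main(final String[] args) {
        // All five permits survive ten exceptions:
        System.out.println(RaiiDemo.remainingAfterFailures()); // prints 5
    }
}
```

Even though every iteration exits the try block with an exception, close() runs each time and the semaphore stays full.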
https://www.yegor256.com/2017/07/11/how-to-redesign-equals.html
How I Would Re-design equals()
- Copenhagen, Denmark
- Yegor Bugayenko
I want to rant a bit about Java design, in particular about the methods Object.equals() and Comparable.compareTo(). I’ve hated them for years, because, no matter how hard I try to like them, the code inside looks ugly. Now I know what exactly is wrong and how I would design this “object-to-object comparing” mechanism better.

Say we have a simple primitive class Weight, objects of which represent the weight of something in kilos:
class Weight {
private int kilos;
Weight(int k) {
this.kilos = k;
}
}
Next, we want two objects of the same weight to be equal to each other:
new Weight(15).equals(new Weight(15));
Here is how such a method may look:
class Weight {
private int kilos;
Weight(int k) {
this.kilos = k;
}
@Override
public boolean equals(Object obj) {
if (!(obj instanceof Weight)) {
return false;
}
Weight weight = Weight.class.cast(obj);
return weight.kilos == this.kilos;
}
}
The ugly part here is, first of all, the type casting with instanceof. The second problem is that we touch the internals of the incoming object. This design makes polymorphic behavior of the Weight impossible. We simply can’t pass anything else to the equals() method besides an instance of the class Weight. We can’t turn it into an interface and introduce multiple implementations of it:
interface Weight {
boolean equals(Object obj);
}
This code will not work:
class DefaultWeight implements Weight {
// attribute and ctor skipped
public boolean equals(Object obj) {
if (!(obj instanceof Weight)) {
return false;
}
Weight weight = Weight.class.cast(obj);
return weight.kilos == this.kilos; // error here!
}
}
The problem is that one object decides for the other whether they are equal. This inevitably forces it to touch the other object’s private attributes in order to do the actual comparison.
What is the solution?
This is what I’m offering. Any comparison, no matter what types are involved, boils down to comparing two digital values. Whether we compare a weight with a weight, text with text, or a user with a user—our CPUs can only compare numbers. Thus, we introduce a new interface Digitizable:
interface Digitizable {
byte[] digits();
}
Next, we introduce a new class Comparison, which is the comparison of two streams of bytes (I’m not sure the code is perfect, I tested it here, feel free to improve and contribute with a pull request):
class Comparison<T extends Digitizable> {
private T lt;
private T rt;
Comparison(T left, T right) {
this.lt = left;
this.rt = right;
}
int value() {
final byte[] left = this.lt.digits();
final byte[] right = this.rt.digits();
int result = 0;
int max = Math.max(left.length, right.length);
for (int idx = max; idx > 0; --idx) {
byte lft = 0;
if (idx <= left.length) {
lft = left[left.length - idx];
}
byte rht = 0;
if (idx <= right.length) {
rht = right[right.length - idx];
}
result = lft - rht;
if (result != 0) {
break;
}
}
return (int) Math.signum(result);
}
}
Now, we need Weight to implement Digitizable:
class Weight implements Digitizable {
private int kilos;
Weight(int k) {
this.kilos = k;
}
@Override
public byte[] digits() {
return ByteBuffer.allocate(4)
.putInt(this.kilos).array();
}
}
Finally, this is how we compare them:
int v = new Comparison<Weight>(
new Weight(400), new Weight(500)
).value();
This v will either be -1, 0, or 1. In this particular case it will be -1, because 400 is less than 500.
No more violation of encapsulation, no more type casting, no more ugly code inside those equals() and compareTo() methods. The class Comparison will work with all possible types. All our objects need to do in order to become comparable is to implement Digitizable and “provide” their bytes for inspection/comparison.
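To see that generality in action, here is a hypothetical second implementation—a Name whose “digits” are simply its ASCII bytes. Digitizable and Comparison are repeated in condensed form so the sketch is self-contained:

```java
import java.nio.charset.StandardCharsets;

// Condensed copies of the interface and class from above.
interface Digitizable {
    byte[] digits();
}

class Comparison<T extends Digitizable> {
    private final T lt;
    private final T rt;
    Comparison(T left, T right) {
        this.lt = left;
        this.rt = right;
    }
    int value() {
        final byte[] left = this.lt.digits();
        final byte[] right = this.rt.digits();
        int result = 0;
        final int max = Math.max(left.length, right.length);
        // Compare byte by byte, treating missing leading bytes as zero.
        for (int idx = max; idx > 0; --idx) {
            final byte lft = idx <= left.length ? left[left.length - idx] : 0;
            final byte rht = idx <= right.length ? right[right.length - idx] : 0;
            result = lft - rht;
            if (result != 0) {
                break;
            }
        }
        return (int) Math.signum(result);
    }
}

// A hypothetical Name type: its "digits" are its ASCII bytes.
class Name implements Digitizable {
    private final String text;
    Name(String text) {
        this.text = text;
    }
    @Override
    public byte[] digits() {
        return this.text.getBytes(StandardCharsets.US_ASCII);
    }
}
```

Two names are equal exactly when value() returns 0, so equality becomes a special case of comparison; for inputs of the same length the resulting order is lexicographic.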
This approach is actually very close to the printers I described earlier.
https://www.yegor256.com/2017/06/22/object-oriented-input-output-in-cactoos.html
Object-Oriented Declarative Input/Output in Cactoos
- Dnipro, Ukraine
- Yegor Bugayenko
Cactoos is a library of object-oriented Java primitives we started to work on just a few weeks ago. The intent was to propose a clean and more declarative alternative to JDK, Guava, Apache Commons, and others. Instead of calling static procedures we want to use objects, the way they are supposed to be used. Let’s see how input/output works in a pure object-oriented fashion.
Disclaimer: The version I’m using at the time of writing is 0.9. Later versions may have different names of classes and a totally different design.
Let’s say you want to read a file. This is how you would do it with the static method readAllBytes() from the utility class Files in JDK7:
byte[] content = Files.readAllBytes(
new File("/tmp/photo.jpg").toPath()
);
This code is very imperative—it reads the file content right here and now, placing it into the array.
This is how you do it with Cactoos:
Bytes source = new InputAsBytes(
new FileAsInput(
new File("/tmp/photo.jpg")
)
);
Pay attention—there are no method calls yet. Just three constructors of three classes that compose a bigger object. The object source is of type Bytes and represents the content of the file. To get that content out of it we call its method asBytes():
byte[] content = source.asBytes();
This is the moment when the file system is touched. This approach, as you can see, is absolutely declarative and thanks to that possesses all the benefits of object orientation.
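The laziness behind this can be sketched in plain Java; the types below are hypothetical minimal analogues for illustration, not the real Cactoos classes:

```java
import java.nio.charset.StandardCharsets;

// A minimal analogue of the Bytes idea: constructing the object is free;
// work happens only when asBytes() is finally called.
interface Bytes {
    byte[] asBytes();
}

final class TextBytes implements Bytes {
    private final String text;
    TextBytes(String text) {
        this.text = text;
    }
    @Override
    public byte[] asBytes() {
        return this.text.getBytes(StandardCharsets.UTF_8);
    }
}

// A decorator that counts how many times the source is actually read,
// demonstrating that composition alone triggers no work.
final class CountingBytes implements Bytes {
    private final Bytes origin;
    private int reads = 0;
    CountingBytes(Bytes origin) {
        this.origin = origin;
    }
    @Override
    public byte[] asBytes() {
        ++this.reads;
        return this.origin.asBytes();
    }
    int reads() {
        return this.reads;
    }
}
```

Constructing new CountingBytes(new TextBytes("hi")) performs zero reads; only the first call to asBytes() performs one.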
Here is another example. Say you want to write some text into a file. Here is how you do it in Cactoos. First you need the Input:
Input input = new BytesAsInput(
new TextAsBytes(
new StringAsText(
"Hello, world!"
)
)
);
Then you need the Output:
Output output = new FileAsOutput(
new File("/tmp/hello.txt")
);
Now, we want to copy the input to the output. There is no “copy” operation in pure OOP. Moreover, there must be no operations at all. Just objects. We have a class named TeeInput, which is an Input that copies everything you read from it to the Output, similar to what TeeInputStream from Apache Commons does, but encapsulated. So we don’t copy, we create an Input that will copy if you touch it:
Input tee = new TeeInput(input, output);
Now, we have to “touch” it. And we have to touch every single byte of it, in order to make sure they all are copied. If we just read() the first byte, only one byte will be copied to the file. The best way to touch them all is to calculate the size of the tee object, going byte by byte. We have an object for it, called LengthOfInput. It encapsulates an Input and behaves like its length in bytes:
Scalar<Long> length = new LengthOfInput(tee);
Then we take the value out of it and the file writing operation takes place:
long len = length.value();
Thus, the entire operation of writing the string to the file will look like this:
new LengthOfInput(
new TeeInput(
new BytesAsInput(
new TextAsBytes(
new StringAsText(
"Hello, world!"
)
)
),
new FileAsOutput(
new File("/tmp/hello.txt")
)
)
).value(); // happens here
This is its procedural alternative from JDK7:
Files.write(
new File("/tmp/hello.txt").toPath(),
"Hello, world!".getBytes()
);
“Why is object-oriented better, even though it’s longer?” I hear you ask. Because it perfectly decouples concepts, while the procedural one keeps them together.
Let’s say, you are designing a class that is supposed to encrypt some text and save it to a file. Here is how you would design it the procedural way (not a real encryption, of course):
class Encoder {
private final File target;
Encoder(final File file) {
this.target = file;
}
void encode(String text) {
Files.write(
this.target.toPath(),
text.replaceAll("[a-z]", "*").getBytes()
);
}
}
Works fine, but what will happen when you decide to extend it to also write to an OutputStream? How will you modify this class? How ugly will it look after that? It will turn ugly, because the design is not object-oriented.
This is how you would do the same design, in an object-oriented way, with Cactoos:
class Encoder {
private final Output target;
Encoder(final File file) {
this(new FileAsOutput(file));
}
Encoder(final Output output) {
this.target = output;
}
void encode(String text) {
new LengthOfInput(
new TeeInput(
new BytesAsInput(
new TextAsBytes(
new StringAsText(
text.replaceAll("[a-z]", "*")
)
)
),
this.target
)
).value();
}
}
What do we do with this design if we want OutputStream to be accepted? We just add one secondary constructor:
class Encoder {
Encoder(final OutputStream stream) {
this(new OutputStreamAsOutput(stream));
}
}
Done. That’s how easy and elegant it is.
That’s because concepts are perfectly separated and functionality is encapsulated. In the procedural example the behavior of the object is located outside of it, in the method encode(). The file itself doesn’t know how to write; some outside procedure, Files.write(), knows that instead.
By contrast, in the object-oriented design FileAsOutput knows how to write, and nobody else does. The file-writing functionality is encapsulated, and this makes it possible to decorate the objects in any possible way, creating reusable and replaceable composite objects.
Do you see the beauty of OOP now?
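The mechanics of the pipeline above can be imitated in a few lines of dependency-free Java—again hypothetical analogues of Input, Output, and TeeInput, not the Cactoos classes themselves:

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

// Minimal analogues of Input and Output.
interface Input {
    byte[] read();
}

interface Output {
    void write(byte[] data);
}

final class TextInput implements Input {
    private final String text;
    TextInput(String text) {
        this.text = text;
    }
    @Override
    public byte[] read() {
        return this.text.getBytes(StandardCharsets.UTF_8);
    }
}

final class MemoryOutput implements Output {
    private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();
    @Override
    public void write(byte[] data) {
        this.buffer.write(data, 0, data.length);
    }
    String text() {
        return new String(this.buffer.toByteArray(), StandardCharsets.UTF_8);
    }
}

// An Input that copies everything it reads into an Output;
// nothing is copied until somebody actually reads it.
final class Tee implements Input {
    private final Input source;
    private final Output target;
    Tee(Input source, Output target) {
        this.source = source;
        this.target = target;
    }
    @Override
    public byte[] read() {
        final byte[] data = this.source.read();
        this.target.write(data);
        return data;
    }
}
```

Nothing is copied when Tee is constructed; only when read() pulls the bytes through does the Output receive them—the same principle that makes LengthOfInput the trigger in Cactoos.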
https://www.yegor256.com/2017/05/16/monikers.html
Monikers Instead of Variables
- Odessa, Ukraine
- Yegor Bugayenko
If we agree that all local variables must be final, multiple returns must be avoided, and temporal coupling between statements is evil—we can get rid of variables entirely and replace them with inline values and their monikers.

Here is the code from Section 5.10 (Algorithms) of my book Elegant Objects:
public class Main {
public static void main(String... args) {
final Secret secret = new Secret();
new Farewell(
new Attempts(
new VerboseDiff(
new Diff(
secret,
new Guess()
)
), 5
),
secret
).say();
}
}
Pay attention to the variable secret. It exists here because we need its value twice: first as a constructor argument for Diff, and second as a constructor argument for Farewell. We can’t inline the value by creating two separate instances of class Secret, because it really has to be the same object—it encapsulates the number that we hide from the user in a number-guessing game.
There could be many other situations where a value needs to be used multiple times while remaining unmodifiable. Why do we still call these values variables if technically they are constants?
I’m suggesting we introduce “monikers” for these values, assigning them through the as keyword. For example:
public class Main {
public static void main(String... args) {
new Farewell(
new Attempts(
new VerboseDiff(
new Diff(
new Secret() as secret,
new Guess()
)
), 5
),
secret
).say();
}
}
Here new Secret() is the inlined value and secret is its moniker, which we use a few lines later.
It would be great to have this feature in Java, right?
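Until such a feature exists, one can get close in today’s Java with a small generic helper that scopes a value to a single expression. The helper let() below is a hypothetical sketch, not a standard API:

```java
import java.util.function.Function;

class Monikers {
    // Scopes a value to one expression, so it can be used several times
    // without a statement-level variable — a rough stand-in for "as".
    static <T, R> R let(T value, Function<T, R> body) {
        return body.apply(value);
    }

    public static void main(String... args) {
        // The value is created once and referenced twice
        // inside a single expression.
        System.out.println(let("secret", s -> s + "/" + s));
    }
}
```

With it, the example above could build the Farewell inside let(new Secret(), secret -> ...), keeping the whole algorithm one expression.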

https://www.yegor256.com/2017/05/10/inversion-of-control.html
How Does Inversion of Control Really Work?
- Odessa, Ukraine
- Yegor Bugayenko
IoC seems to have become the cornerstone concept of many frameworks and object-oriented designs since it was described by Martin Fowler, Robert Martin and others ten years ago. Despite its popularity IoC is misunderstood and overcomplicated all too often.

Look at this code:
print(book.title());
It is very straightforward: we retrieve the title from the book and simply give it to the print() procedure, or whatever else it might be. We are in charge; the control is in our hands.
In contrast to this, here is the inversion:
print(book);
We give the entire book to the procedure print() and it calls title() when it feels like it. That is, we delegate control.
This is pretty much everything you need to know about IoC.
Does it have anything to do with dependency injection containers? Well, of course, we could put the book into a container, inject the entire container into print(), let it retrieve the book from the container and then call title(). But that’s not what IoC is really about—it’s merely one of its perverted usage scenarios.
The main point of IoC is exactly the same as I was proposing in my previous posts about naked data and object friends: we must not deal with data, but instead only with object composition. In the given example the design would be even better if we got rid of the print() procedure altogether and replaced it with an object:
new PrintedBook(book);
That would be pure object composition.
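Here is a minimal sketch of what such a PrintedBook might look like (the Book interface and the print() method on PrintedBook are my assumptions, not code from the original):

```java
// Hypothetical sketch: instead of a print() procedure pulling data out
// of the book, a PrintedBook object decides itself when to call title().
interface Book {
    String title();
}

final class PrintedBook {
    private final Book book;

    PrintedBook(Book book) {
        this.book = book;
    }

    void print() {
        // Control is inverted: this object calls title() when it needs it.
        System.out.println(this.book.title());
    }
}

public class Main {
    public static void main(String... args) {
        Book book = () -> "Object Thinking"; // Book is a functional interface here
        new PrintedBook(book).print();
    }
}
```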
There is not much more to say on this subject; I hope I have cleared it up for you—it is just as simple as that.

https://www.yegor256.com/2017/03/28/solid.html
SOLID Is OOP for Dummies
- Kharkiv, Ukraine
- Yegor Bugayenko
- Discussed at:
- dzone
You definitely know the SOLID acronym. It stands for five principles of object-oriented programming that, if followed, are supposed to make your code both legible and extensible. They were introduced almost 30 years ago, but have they really made us better programmers in the time since? Do we really understand OOP better thanks to them? Do we write more “legible and extensible” code? I don’t think so.

Let’s go one by one and see how they “help.”
S
The “S” refers to the Single Responsibility Principle, which, according to Clean Code by Robert Martin, means that “a class should have only one reason to change.”
This statement sounds extremely vague to me, but the book explains it, stating that objects must be problem-centered and responsible for “one thing.” It’s up to us to decide what that one thing is, of course.
This is what we know as “high cohesion” since Larry Constantine wrote about it in the IBM Systems Journal in 1974. Why was it necessary to create a new principle 15 years later with an ambiguous name and a very questionable definition?
O
This letter is about the Open/Closed Principle, which was introduced by Bertrand Meyer in Object-Oriented Software Construction in 1988. Simply put, it means that an object should not be modifiable. I can’t agree more with this.
But then it says it should be extendable, literally through implementation inheritance, which is known as an anti-OOP technology. Thus, this principle is not really applicable to objects and OOP. It may work with modules and services, but not with objects.
L
The third letter is for the Liskov Substitution Principle, which was introduced by Barbara Liskov in 1987. This one is the most innocent part in the SOLID pentad. In simple words, it states that if your method expects a Collection, an ArrayList will work.
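In Java terms, the claim looks like this trivial sketch of my own:

```java
import java.util.ArrayList;
import java.util.Collection;

public class Main {
    // The method is written against Collection...
    static int size(Collection<String> items) {
        return items.size();
    }

    public static void main(String... args) {
        // ...and a subtype such as ArrayList substitutes for it seamlessly.
        ArrayList<String> list = new ArrayList<>();
        list.add("Liskov");
        System.out.println(size(list));
    }
}
```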
It is also known as subtyping and is the foundational component of any object-oriented language. Why do we need to call it a principle and “follow” it? Is it at all possible to create any object-oriented software without subtyping? If this one is a principle, let’s add “variables” and “method calling” here too.
Honestly, I suspect that this principle was added to SOLID mostly in order to somehow fill the gap between “SO” and “ID.”
I and D
I guess they both were introduced by Robert Martin while he was working at Xerox.
The Interface Segregation Principle states that you must not declare List x if you only need Collection x or even Iterable x. I can’t agree more. Let’s see the next one.
The Dependency Inversion Principle means that instead of ArrayList x, you must declare List x and let the provider of the object decide whether it is ArrayList or LinkedList. This one also sounds reasonable to me.
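Both ideas fit into one tiny Java sketch (the method and variable names are mine):

```java
import java.util.ArrayList;
import java.util.List;

public class Main {
    // ISP: the method asks only for Iterable, the narrowest type it
    // actually needs, not for List or even Collection.
    static int count(Iterable<String> items) {
        int n = 0;
        for (String ignored : items) {
            n++;
        }
        return n;
    }

    public static void main(String... args) {
        // DIP: the variable is declared as List, and the provider decides
        // that the implementation happens to be ArrayList.
        List<String> names = new ArrayList<>();
        names.add("Fowler");
        names.add("Martin");
        System.out.println(count(names));
    }
}
```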
However, how is all this different from the good old “loose coupling” introduced together with cohesion by Constantine in 1974? Do we really need to simplify and blur in order to learn better? No, not to learn better, but to sell better. Here goes my point.
My point is…
The point is that these principles are nothing but an explanation of “cohesion and coupling” for dummies in a very primitive, ambiguous, and marketable way. Dummies will buy books, seminars, and trainings, but won’t really be able to understand the logic behind them. Do they really need to? They are just monkey coders, right?
“But an object must be responsible for one thing!” is what I often hear at conferences. People learn that mantra without even knowing what cohesion is nor understanding what this “one thing” they are praying for really is. There is no such thing as “one thing,” guys! There are different levels of cohesion.
Who is guilty? Uncle Bob & Co.
They are no better than Ridley Scott and other Hollywood money makers who deliver primitive and easy-to-cry-at movies just to generate a profit. People are getting dumber by watching—but this is not of their concern. The same happens with magic OOP principles—programmers rely on them, thinking the truth is right there while the real truth is not understood even by the creators of this “magic.”
SOLID is a money-making instrument, not an instrument to make code better.

https://www.yegor256.com/2017/03/07/traits-and-mixins.html
Traits and Mixins Are Not OOP
- Odessa, Ukraine
- Yegor Bugayenko
Let me say right off the bat that the features we will discuss here are pure poison brought to object-oriented programming by those who desperately needed a lobotomy, just like David West suggested in his Object Thinking book. These features have different names, but the most common ones are traits and mixins. I seriously can’t understand how we can still call programming object-oriented when it has these features.

First, here’s how they work in a nutshell. Let’s use Ruby modules as a sample implementation. Say that we have a class Book:
class Book
  def initialize(title)
    @title = title
  end
end
Now, we want class Book to use a static method (a procedure) that does something useful. We may either define it in a utility class and let Book call it:
class TextUtils
  def self.caps(text)
    text.split.map(&:capitalize).join(' ')
  end
end

class Book
  def print
    puts "My title is #{TextUtils.caps(@title)}"
  end
end
Or we may make it even more “convenient” and extend our module in order to access its methods directly:
module TextModule
  def caps(text)
    text.split.map(&:capitalize).join(' ')
  end
end

class Book
  extend TextModule
  def print
    puts "My title is #{caps(@title)}"
  end
end
It seems nice—if you don’t understand the difference between object-oriented programming and static methods. Moreover, if we forget OOP purity for a minute, this approach actually looks less readable to me, even though it has fewer characters; it’s difficult to understand where the method caps() is coming from when it’s called just like #{caps(@title)} instead of #{TextUtils.caps(@title)}. Don’t you think?
Mixins start to play their role better when we include them. We can combine them to construct the behavior of the class we’re looking for. Let’s create two mixins. The first one will be called PlainMixin and will print the title of the book the way it is, and the second one will be called CapsMixin and will capitalize what’s already printed:
module CapsMixin
  def to_s
    super.to_s.split.map(&:capitalize).join(' ')
  end
end

module PlainMixin
  def to_s
    @title
  end
end

class Book
  def initialize(title)
    @title = title
  end
  include CapsMixin, PlainMixin
  def print
    puts "My title is #{self}"
  end
end
Calling Book without the included mixin will print its title the way it is. Once we add the include statement, the behavior of to_s is overridden and method print produces a different result. We can combine mixins to produce the required functionality. For example, we can add one more, which will abbreviate the title to 16 characters:
module AbbrMixin
  def to_s
    super.to_s.gsub(/^(.{16,}?).*$/m, '\1...')
  end
end

class Book
  def initialize(title)
    @title = title
  end
  include AbbrMixin, CapsMixin, PlainMixin
  def print
    puts "My title is #{self}"
  end
end
I’m sure you already understand that they all have access to the private attribute @title of class Book. They actually have full access to everything in the class. They literally are “pieces of code” that we inject into the class to make it more powerful and complex. What’s wrong with this approach?
It’s the same issue as with annotations, DTOs, getters, and utility classes—they tear objects apart and place pieces of functionality in places where objects don’t see them.
In the case of mixins, the functionality is in the Ruby modules, which make assumptions about the internal structure of Book and further assume that the programmer will still understand what’s in Book after the internal structure changes. Such assumptions completely violate the very idea of encapsulation.
Such tight coupling between mixins and an object’s private structure leads to nothing but unmaintainable, difficult-to-understand code.
The very obvious alternatives to mixins are composable decorators. Take a look at the example given in the article:
Text text = new AllCapsText(
  new TrimmedText(
    new PrintableText(
      new TextInFile(new File("/tmp/a.txt"))
    )
  )
);
Doesn’t it look very similar to what we were doing above with Ruby mixins?
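Here is a minimal, self-contained sketch of how such composable decorators can be built. The interface name Text matches the composition above; TextOf is a hypothetical plain source standing in for TextInFile, and the decorator bodies are illustrative, not the actual classes from the cited article:

```java
// Illustrative sketch of composable text decorators.
// TextOf is a hypothetical plain source, not a real library class.
interface Text {
    String read();
}

// A plain text source that simply returns the string it was given.
final class TextOf implements Text {
    private final String source;

    TextOf(final String src) {
        this.source = src;
    }

    @Override
    public String read() {
        return this.source;
    }
}

// Decorator: trims whitespace around the wrapped text.
final class TrimmedText implements Text {
    private final Text origin;

    TrimmedText(final Text txt) {
        this.origin = txt;
    }

    @Override
    public String read() {
        return this.origin.read().trim();
    }
}

// Decorator: upper-cases the wrapped text.
final class AllCapsText implements Text {
    private final Text origin;

    AllCapsText(final Text txt) {
        this.origin = txt;
    }

    @Override
    public String read() {
        return this.origin.read().toUpperCase();
    }
}
```

With these classes, new AllCapsText(new TrimmedText(new TextOf("  hello  "))).read() returns "HELLO"; each layer stays a small object that knows nothing about the internals of the object it wraps.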
However, unlike mixins, decorators leave objects small and cohesive, layering extra functionality on top of them. Mixins do the opposite—they make objects more complex and, thanks to that, less readable and maintainable.
I honestly believe they are just poison. Whoever invented them was a long way from understanding the philosophy of object-oriented design.

https://www.yegor256.com/2017/02/28/too-many-classes.html
How to Handle the Problem of Too Many Classes
- Odessa, Ukraine
- Yegor Bugayenko
During nearly every presentation in which I explain my view of object-oriented programming, there is someone who shares a comment like this: “If we follow your advice, we will have so many small classes.” And my answer is always the same: “Of course we will, and that’s great!” I honestly believe that even if you can’t consider having “a lot of classes” a virtue, you can’t call it a drawback of any truly object-oriented code either. However, there may come a point when classes become a problem; let’s see when, how, and what to do about that.

There were a number of “rules” previously mentioned that, if applied, would obviously lead to a large number of classes, including: a) all public methods must be declared in interfaces; b) objects must not have more than four attributes (Section 2.1 of Elegant Objects); c) static methods are not allowed; d) constructors must be code-free; e) objects must expose fewer than five public methods (Section 3.1 of Elegant Objects).
The biggest concern, of course, is maintainability: “If, instead of 50 longer classes, we had 300 shorter ones, then the code would be way less readable.” This will most certainly happen if you design them wrong.
Types (or classes) in OOP constitute your vocabulary, which explains the world around your code—the world your code lives in. The richer the vocabulary, the more powerful your code. The more types you have, the better you can understand and explain the world.
If your vocabulary is big enough, you will say something like:
Read the book that is on the table.
With a much smaller vocabulary, the same phrase would sound like:
Do it with the thing that is on that thing.
Obviously, it’s easier to read and understand the first phrase. The same occurs with types in OOP: the more of them you have at your disposal, the more expressive, bright, and readable your code is.
Unfortunately, Java and many other languages are not designed with this concept in mind. Packages, modules, and namespaces don’t really help, and we usually end up with names like AbstractCookieValueMethodArgumentResolver (Spring) or CombineFileRecordReaderWrapper (Hadoop). We’re trying to pack as many semantics into class names as possible so their users won’t doubt for a second. Then we’re trying to put as many methods into one class as possible to make life easier for users; they will use their IDE hints to find the right one.
This is anything but OOP.
If your code is object-oriented, your classes must be small, their names must be nouns, and their method names must be just one word. Here is what I do in my code to make that happen:
Interfaces are nouns. For example, Request, Directive, or Domain. There are no exceptions. Types (also known as interfaces in Java) are the core part of my vocabulary; they have to be nouns.
Classes are prefixed. My classes always implement interfaces. Thanks to that, I can say they always are requests, directives, or domains. And I always want their users to remember that. Prefixes help. For example, RqBuffered is a buffered request, RqSimple is a simple request, RqLive is a request that represents a “live” HTTP connection, and RqWithHeader is a request with an extra header.
An alternative approach is to use the type name as the central part of the class name and add a prefix that explains implementation details. For example, DyDomain is a domain that persists its data in DynamoDB. Once you know what that Dy prefix is for, you can easily understand what DyUser and DyBase are about.
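To make the convention concrete, here is a sketch in the spirit of the Rq prefix described above. The interface is a noun, every implementation keeps the type prefix; the method bodies below are hypothetical, not the actual Takes code:

```java
import java.util.ArrayList;
import java.util.List;

// The type is a noun; implementations carry the "Rq" prefix.
interface Request {
    List<String> head();
}

// RqSimple: a simple request built from a fixed list of head lines
// (hypothetical body, for illustration only).
final class RqSimple implements Request {
    private final List<String> lines;

    RqSimple(final List<String> lines) {
        this.lines = lines;
    }

    @Override
    public List<String> head() {
        return this.lines;
    }
}

// RqWithHeader: a request decorated with one extra header line
// (hypothetical body, for illustration only).
final class RqWithHeader implements Request {
    private final Request origin;
    private final String header;

    RqWithHeader(final Request req, final String hdr) {
        this.origin = req;
        this.header = hdr;
    }

    @Override
    public List<String> head() {
        final List<String> all = new ArrayList<>(this.origin.head());
        all.add(this.header);
        return all;
    }
}
```

A composition like new RqWithHeader(new RqSimple(...), "Host: localhost") then reads almost like the rich-vocabulary sentence above: a request, with a header, built from a simple request.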
In a medium-sized application or a library, there will be as many as 10 to 15 prefixes you will have to remember, no more. For example, in the Takes Framework, there are 24,000 lines of code, 410 Java files, and 10 prefixes: Bc, Cc, Tk, Rq, Rs, Fb, Fk, Hm, Ps, and Xe. Not so difficult to remember what they mean, right?
Among all 240 classes, the longest name is RqWithDefaultHeader.
I find this approach to class naming rather convenient. I used it in these open source projects (in GitHub): yegor256/takes (10 prefixes), yegor256/jare (5 prefixes), yegor256/rultor (6 prefixes), and yegor256/wring (5 prefixes).

https://www.yegor256.com/2017/02/07/private-method-is-new-class.html
Each Private Static Method Is a Candidate for a New Class
- Kharkiv, Ukraine
- Yegor Bugayenko
Do you have private static methods that help you break your algorithms down into smaller parts? I do. Every time I write a new method, I realize that it can be a new class instead. Of course, I don’t make classes out of all of them, but that has to be the goal. Private static methods are not reusable, while classes are—that is the main difference between them, and it is crucial.

Here is an example of a simple class:
class Token {
  private String key;
  private String secret;
  String encoded() {
    return "key="
      + URLEncoder.encode(key, "UTF-8")
      + "&secret="
      + URLEncoder.encode(secret, "UTF-8");
  }
}
There is an obvious code duplication, right? The easiest way to resolve it is to introduce a private static method:
class Token {
  private String key;
  private String secret;
  String encoded() {
    return "key="
      + Token.encoded(key)
      + "&secret="
      + Token.encoded(secret);
  }
  private static String encoded(String text) {
    return URLEncoder.encode(text, "UTF-8");
  }
}
Looks much better now. But what will happen if we have another class that needs the exact same functionality? We will have to copy and paste this private static method encoded() into it, right?
A better alternative would be to introduce a new class Encoded that implements the functionality we want to share:
class Encoded {
  private final String raw;
  Encoded(final String raw) {
    this.raw = raw;
  }
  @Override
  public String toString() {
    return URLEncoder.encode(this.raw, "UTF-8");
  }
}
And then:
class Token {
  private String key;
  private String secret;
  String encoded() {
    return "key="
      + new Encoded(key)
      + "&secret="
      + new Encoded(secret);
  }
}
Now this functionality is 1) reusable, and 2) testable. We can easily use this class Encoded in many other places, and we can create a unit test for it. We were not able to do that with the private static method before.
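Once Encoded is a class of its own, a unit test is just a few lines. Here is a sketch; it swaps in the Charset overload of URLEncoder.encode (available since Java 10), because the "UTF-8" String overload throws a checked exception that toString() cannot declare:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

// Encoded restated with the Charset overload of URLEncoder.encode,
// which throws no checked exception and so fits inside toString().
final class Encoded {
    private final String raw;

    Encoded(final String raw) {
        this.raw = raw;
    }

    @Override
    public String toString() {
        return URLEncoder.encode(this.raw, StandardCharsets.UTF_8);
    }
}

// A minimal test: spaces become '+', '&' is percent-encoded.
final class EncodedTest {
    public static void main(final String[] args) {
        assert new Encoded("a b").toString().equals("a+b");
        assert new Encoded("key&secret").toString().equals("key%26secret");
    }
}
```

No such test was possible while the encoding logic was hidden inside a private static method of Token.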
See the point? The rule of thumb I’ve already figured for myself is that each private static method is a perfect candidate for a new class. That’s why we don’t have them at all in EO.
By the way, public static methods are a different story. They are also evil, but for different reasons.
P.S. Now I think that the reasons in this article are applicable to all private methods, not only static ones.
" /> class that implements an interface by making an instance of another class. Sound weird? Let me show you an example. There are many classes of that kind in the Takes Framework, and they all are named like*Wrap. It’s a convenient design concept that, unfortunately, looks rather verbose in Java. It would be great to have something shorter, like in EO for example.
Take a look at RsHtml from Takes Framework. Its design looks like this (a simplified version with only one primary constructor):
class RsHtml extends RsWrap {
RsHtml(final String text) {
super(
new RsWithType(
new RsWithStatus(text, 200),
"text/html"
)
);
}
}Now, let’s take a look at that RsWrap it extends:
public class RsWrap implements Response {
private final Response origin;
public RsWrap(final Response res) {
this.origin = res;
}
@Override
public final Iterable<String> head() {
return this.origin.head();
}
@Override
public final InputStream body() {
return this.origin.body();
}
}As you see, this “decorator” doesn’t do anything except “just decorating.” It encapsulates another Response and passes through all method calls.
If it’s not clear yet, I’ll explain the purpose of RsHtml. Let’s say you have text and you want to create a Response:
String text = // you have it already
Response response = new RsWithType(
new RsWithStatus(text, HttpURLConnection.HTTP_OK),
"text/html"
);Instead of doing this composition of decorators over and over again in many places, you use RsHtml:
String text = // you have it already
Response response = new RsHtml(text);It is very convenient, but that RsWrap is very verbose. There are too many lines that don’t do anything special; they just forward all method calls to the encapsulated Response.
How about we introduce a new concept, “decorators,” with a new keyword, decorates:
class RsHtml decorates Response {
RsHtml(final String text) {
this(
new RsWithType(
new RsWithStatus(text, 200),
"text/html"
)
)
}
}Then, in order to create an object, we just call:
Response response = new RsHtml(text);We don’t have any new methods in the decorators, just constructors. The only purpose for these guys is to create other objects and encapsulate them. They are not really full-purpose objects. They only help us create other objects.
That’s why I would call them “decorating envelopes.”
This idea may look very similar to the Factory design pattern, but it doesn’t have static methods, which we are trying to avoid in object-oriented programming.
"/>
https://www.yegor256.com/2017/01/31/decorating-envelopes.html
Decorating Envelopes
- Lviv, Ukraine
- Yegor Bugayenko
Very often I need a class that implements an interface by making an instance of another class. Sounds weird? Let me show you an example. There are many classes of that kind in the Takes Framework, and they all are named like *Wrap. It’s a convenient design concept that, unfortunately, looks rather verbose in Java. It would be great to have something shorter, like in EO, for example.

Take a look at RsHtml from Takes Framework. Its design looks like this (a simplified version with only one primary constructor):
class RsHtml extends RsWrap {
RsHtml(final String text) {
super(
new RsWithType(
new RsWithStatus(text, 200),
"text/html"
)
);
}
}
Now, let’s take a look at that RsWrap it extends:
public class RsWrap implements Response {
private final Response origin;
public RsWrap(final Response res) {
this.origin = res;
}
@Override
public final Iterable<String> head() {
return this.origin.head();
}
@Override
public final InputStream body() {
return this.origin.body();
}
}
As you see, this “decorator” doesn’t do anything except “just decorating.” It encapsulates another Response and passes through all method calls.
If it’s not clear yet, I’ll explain the purpose of RsHtml. Let’s say you have text and you want to create a Response:
String text = // you have it already
Response response = new RsWithType(
new RsWithStatus(text, HttpURLConnection.HTTP_OK),
"text/html"
);
Instead of doing this composition of decorators over and over again in many places, you use RsHtml:
String text = // you have it already
Response response = new RsHtml(text);
It is very convenient, but that RsWrap is very verbose. There are too many lines that don’t do anything special; they just forward all method calls to the encapsulated Response.
How about we introduce a new concept, “decorators,” with a new keyword, decorates:
class RsHtml decorates Response {
RsHtml(final String text) {
this(
new RsWithType(
new RsWithStatus(text, 200),
"text/html"
)
);
}
}
Then, in order to create an object, we just call:
Response response = new RsHtml(text);
We don’t have any new methods in the decorators, just constructors. Their only purpose is to create other objects and encapsulate them. They are not really full-purpose objects; they only help us create other objects.
That’s why I would call them “decorating envelopes.”
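To make the pattern concrete outside of Takes, here is a minimal self-contained sketch in plain Java. The names (Text, Plain, Upper, TxWrap, UpperGreeting) are hypothetical, invented for illustration; the point is that the envelope adds no behavior of its own, only a canned composition in its constructor:

```java
// A small interface, one plain implementation, one real decorator,
// and a "decorating envelope" that only pre-assembles a composition.
interface Text {
    String read();
}

class Plain implements Text {
    private final String value;
    Plain(final String value) {
        this.value = value;
    }
    @Override
    public String read() {
        return this.value;
    }
}

class Upper implements Text {
    private final Text origin;
    Upper(final Text origin) {
        this.origin = origin;
    }
    @Override
    public String read() {
        return this.origin.read().toUpperCase();
    }
}

// The verbose part: a pass-through wrapper, analogous to RsWrap.
class TxWrap implements Text {
    private final Text origin;
    TxWrap(final Text origin) {
        this.origin = origin;
    }
    @Override
    public final String read() {
        return this.origin.read();
    }
}

// The envelope: no new methods, just a constructor that composes.
class UpperGreeting extends TxWrap {
    UpperGreeting(final String name) {
        super(new Upper(new Plain("hello, " + name)));
    }
}

public class Demo {
    public static void main(String[] args) {
        System.out.println(new UpperGreeting("jeff").read()); // prints "HELLO, JEFF"
    }
}
```

Callers see only `new UpperGreeting("jeff")`, exactly like `new RsHtml(text)` above.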
This idea may look very similar to the Factory design pattern, but it doesn’t have static methods, which we are trying to avoid in object-oriented programming.
https://www.yegor256.com/2017/01/17/synchronized-decorators.html
Synchronized Decorators to Replace Thread-Safe Classes
- Odessa, Ukraine
- Yegor Bugayenko
You know what thread safety is, right? If not, there is a simple example below. All classes must be thread-safe, right? Not really. Maybe only some of them have to be? Wrong again. I think none of them should be thread-safe, while all of them should provide synchronized decorators.

Let’s start with an example (it’s mutable, by the way):
interface Position {
void increment();
}
class SimplePosition implements Position {
private int number = 0;
@Override
public void increment() {
int before = this.number;
int after = before + 1;
this.number = after;
}
}
What do you think—is it thread-safe? This term refers to whether an object of this class will operate without mistakes when used by multiple threads at the same time. Let’s say we have two threads working with the same object, position, and calling its method increment() at exactly the same moment in time.
We expect the number integer to be equal to 2 when both threads finish up, because each of them will increment it once, right? However, most likely this won’t happen.
Let’s see what will happen. In both threads, before will equal 0 when they start. Then after will be set to 1. Then, both threads will do this.number = 1 and we will end up with 1 in number instead of the expected 2. See the problem? Classes with such a flaw in their design are not thread-safe.
The simplest and most obvious solution is to make our method synchronized. That will guarantee that no matter how many threads call it at the same time, they will all go sequentially, not in parallel: one thread after another. Of course, it will take longer, but it will prevent that mistake from happening:
class SimplePosition implements Position {
private int number = 0;
@Override
public synchronized void increment() {
int before = this.number;
int after = before + 1;
this.number = after;
}
}
A class that guarantees it won’t break no matter how many threads are working with it is called thread-safe.
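A quick way to convince yourself that the synchronized version is exact is to hammer one counter from two threads and check the total. This sketch uses hypothetical names (SafeCounter, RaceDemo) and adds a value() reader that the example above doesn’t have, purely so we can observe the result:

```java
// Two threads increment the same counter 100,000 times each.
// Because increment() is synchronized, no update is lost.
class SafeCounter {
    private int number = 0;
    public synchronized void increment() {
        this.number = this.number + 1;
    }
    public synchronized int value() {
        return this.number;
    }
}

public class RaceDemo {
    public static void main(String[] args) throws InterruptedException {
        SafeCounter counter = new SafeCounter();
        Runnable task = () -> {
            for (int i = 0; i < 100_000; ++i) {
                counter.increment();
            }
        };
        Thread first = new Thread(task);
        Thread second = new Thread(task);
        first.start();
        second.start();
        first.join();
        second.join();
        System.out.println(counter.value()); // prints 200000
    }
}
```

Drop the synchronized keywords and the printed number will almost certainly fall short of 200000, which is the lost-update problem described above.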
Now the question is: Do we have to make all classes thread-safe, or only some of them? It would seem better to have all classes error-free, right? Why would anyone want an object that may break at some point? Well, not exactly. Remember, there is a performance concern involved; we don’t often have multiple threads, and we always want our objects to run as fast as possible. An inter-thread synchronization mechanism will definitely slow us down.
I think the right approach is to have two classes. The first one is not thread-safe, while the other one is a synchronized decorator, which would look like this:
class SyncPosition implements Position {
private final Position origin;
SyncPosition(Position pos) {
this.origin = pos;
}
@Override
public synchronized void increment() {
this.origin.increment();
}
}
Now, when we need our position object to be thread-safe, we decorate it with SyncPosition:
Position position = new SyncPosition(
new SimplePosition()
);
When we need a plain simple position, without any thread safety, we do this:
Position position = new SimplePosition();
Making class functionality both rich and thread-safe is, in my opinion, a violation of that famous single responsibility principle.
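The JDK itself follows this split, by the way: ArrayList is deliberately not thread-safe, and java.util.Collections wraps it in a synchronized decorator on demand (albeit via a static factory method rather than a decorating envelope):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class SyncListDemo {
    public static void main(String[] args) {
        // Plain, fast, not thread-safe:
        List<String> plain = new ArrayList<>();
        // The same underlying list behind a synchronized decorator:
        List<String> safe = Collections.synchronizedList(plain);
        safe.add("hello");
        System.out.println(plain.size()); // prints 1: both names share one list
    }
}
```

Every call through `safe` is synchronized; calls through `plain` are not, exactly like SimplePosition versus SyncPosition above.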
By the way, this problem is very close to the one of defensive programming and validators.
https://www.yegor256.com/2016/12/20/can-objects-be-friends.html
Can Objects Be Friends?
- Moscow, Russia
- Yegor Bugayenko
As discussed before, proper encapsulation leads to a complete absence of “naked data.” However, the question remains: How can objects interact if they can’t exchange data? Eventually we have to expose some data in order to let other objects use it, right? Yes, that’s true. However, I guess I have a solution that keeps encapsulation in place while allowing objects to interact.

Say that this is our object:
class Temperature {
private int t;
public String toString() {
return String.format("%d C", this.t);
}
}
It represents a temperature. The only behavior it exposes is printing the temperature in Celsius. We don’t want to expose t, because that will lead to the “naked data” problem. We want to keep t secret, and that’s a good desire.
Now, we want to have the ability to print temperature in Fahrenheit. The most obvious approach would be to introduce another method, toFahrenheitString(), or add a Boolean flag to the object, which will change the behavior of method toString(), right? Either one of these solutions is better than adding a method getT(), but neither one is perfect.
What if we create this decorator:
class TempFahrenheit implements Temperature {
private final TempCelsius origin;
TempFahrenheit(final TempCelsius origin) {
this.origin = origin;
}
public String toString() {
return String.format(
"%d F", (int) (this.origin.t * 1.8 + 32)
);
}
}
It should work just great:
Temperature t = new TempFahrenheit(
new TempCelsius(35)
);
The only problem is that it won’t compile in Java, because class TempFahrenheit is not allowed to access private t in class TempCelsius. And if we make t public, everybody will be able to read it directly, and we’ll have that “naked data” problem—a severe violation of encapsulation.
However, if we allow that access only to one class, everything will be fine. Something like this (won’t work in Java; it’s just a concept):
class TempCelsius {
trust TempFahrenheit; // here!
private int t;
public String toString() {
return String.format("%d C", this.t);
}
}
Since this trust keyword is placed in the class that allows access, we won’t have the “naked data” problem—we will always know exactly which objects possess knowledge about t. When we change something about t, we know exactly where to update the code.
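Java has no trust keyword, but nested classes come close: a class nested inside TempCelsius may read its private t, while everyone else still can’t. Here is a compilable sketch of that workaround; the nesting and the Fahrenheit class name are my illustration, not part of the proposal above:

```java
class TempCelsius {
    private final int t;
    TempCelsius(final int t) {
        this.t = t;
    }
    @Override
    public String toString() {
        return String.format("%d C", this.t);
    }
    // Nested, so it is "trusted" with the private t; outsiders are not.
    static class Fahrenheit {
        private final TempCelsius origin;
        Fahrenheit(final TempCelsius origin) {
            this.origin = origin;
        }
        @Override
        public String toString() {
            return String.format("%d F", (int) (this.origin.t * 1.8 + 32));
        }
    }
}

public class TrustDemo {
    public static void main(String[] args) {
        System.out.println(new TempCelsius.Fahrenheit(new TempCelsius(35))); // prints "95 F"
    }
}
```

The access list is still visible in one place, the body of TempCelsius, which is the property the trust keyword is after; the downside is that both classes must live in the same source file.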
What do you think?
P.S. After discussing this idea in the comments, I started to think that we don’t need that trust keyword at all. Instead, we should just give all decorators access to all private attributes of an object.
" /> Model-View-Controller (MVC) is an architectural pattern we all are well aware of. It’s a de-facto standard for almost all UI and Web frameworks. It is convenient and easy to use. It is simple and effective. It is a great concept … for a procedural programmer. If your software is object-oriented, you should dislike MVC as much as I do. Here is why.
This is how MVC architecture looks:
Controller is in charge, taking care of the data received from Model and injecting it into View—and this is exactly the problem. The data escapes the Model and becomes “naked,” which is a big problem, as we agreed earlier. OOP is all about encapsulation—data hiding.
MVC architecture does exactly the opposite by exposing the data and hiding behavior. The controller deals with the data directly, making decisions about its purpose and properties, while the objects, which are supposed to know everything about the data and hide it, remain anemic. That is exactly the principle any procedural architecture is built upon; the code is in charge of the data. Take this C++ code, for example:
void print_speed() { // controller
int s = load_from_engine(); // model
printf("The speed is %d mph", s); // view
}The function print_speed() is the controller. It gets the data s from the model load_from_engine() and renders it via the view printf(). Only the controller knows that the data is in miles per hour. The engine returns int without any properties. The controller simply assumed that that data is in mph. If we want to create a similar controller somewhere else, we will have to make a similar assumption again and again. That’s what the “naked data” problem is about, and it leads to serious maintainability issues.
This is an object-oriented alternative to the code above (pseudo-C++):
printf(
new PrintedSpeed( // view
new FormattedSpeed( // controller
new SpeedFromEngine() // model
)
)
);Here, SpeedFromEngine.speed() returns speed in mph, as an integer; FormattedSpeed.speed() returns "%d mph"; and finally, PrintedSpeed.to_str() returns the full text of the message. We can call them “model, view, and controller,” but in reality they are just objects decorating each other. It’s still the same entity—the speed. But it gets more complex and intelligent by being decorated.
We don’t tear the concept of speed apart. The speed is the speed, no matter who works with it and where it is presented. It just gets new behavior from decorators. It grows, but never falls apart.
To summarize, Controller is a pure procedural component in the MVC trio, which turns Model into a passive data holder and View into a passive data renderer. The controller, the holder, the renderer … Is it really OOP?
"/>
https://www.yegor256.com/2016/12/13/mvc-vs-oop.html
MVC vs. OOP
- Kiev, Ukraine
- Yegor Bugayenko
Model-View-Controller (MVC) is an architectural pattern we are all well aware of. It’s a de facto standard for almost all UI and Web frameworks. It is convenient and easy to use. It is simple and effective. It is a great concept … for a procedural programmer. If your software is object-oriented, you should dislike MVC as much as I do. Here is why.

This is how MVC architecture looks:
Controller is in charge, taking care of the data received from Model and injecting it into View—and this is exactly the problem. The data escapes the Model and becomes “naked,” which is a big problem, as we agreed earlier. OOP is all about encapsulation—data hiding.
MVC architecture does exactly the opposite by exposing the data and hiding behavior. The controller deals with the data directly, making decisions about its purpose and properties, while the objects, which are supposed to know everything about the data and hide it, remain anemic. That is exactly the principle any procedural architecture is built upon: the code is in charge of the data. Take this C++ code, for example:
void print_speed() { // controller
  int s = load_from_engine(); // model
  printf("The speed is %d mph", s); // view
}
The function print_speed() is the controller. It gets the data s from the model load_from_engine() and renders it via the view printf(). Only the controller knows that the data is in miles per hour. The engine returns an int without any properties; the controller simply assumes that the data is in mph. If we want to create a similar controller somewhere else, we will have to make the same assumption again and again. That’s what the “naked data” problem is about, and it leads to serious maintainability issues.
This is an object-oriented alternative to the code above (pseudo-C++):
printf(
  new PrintedSpeed( // view
    new FormattedSpeed( // controller
      new SpeedFromEngine() // model
    )
  )
);
Here, SpeedFromEngine.speed() returns the speed in mph, as an integer; FormattedSpeed.speed() returns "%d mph"; and finally, PrintedSpeed.to_str() returns the full text of the message. We can call them “model, view, and controller,” but in reality they are just objects decorating each other. It’s still the same entity—the speed. But it gets more complex and intelligent by being decorated.
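The decorating chain above can be sketched in plain Java. This is only an illustration of the idea, not the author's exact code: the Speed interface, the stub value 65, and the method shapes are my assumptions, mirroring the pseudo-C++ names.

```java
interface Speed {
    String speed();
}

class SpeedFromEngine implements Speed { // model
    @Override
    public String speed() {
        return "65"; // stub: a real engine sensor would be read here
    }
}

class FormattedSpeed implements Speed { // controller
    private final Speed origin;
    FormattedSpeed(Speed origin) {
        this.origin = origin;
    }
    @Override
    public String speed() {
        return String.format("%s mph", this.origin.speed()); // decorates with units
    }
}

class PrintedSpeed { // view
    private final Speed origin;
    PrintedSpeed(Speed origin) {
        this.origin = origin;
    }
    @Override
    public String toString() {
        return "The speed is " + this.origin.speed(); // decorates with the message
    }
}
```

Printing `new PrintedSpeed(new FormattedSpeed(new SpeedFromEngine()))` would then produce the full message; the speed stays one entity, only decorated layer by layer.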
We don’t tear the concept of speed apart. The speed is the speed, no matter who works with it and where it is presented. It just gets new behavior from decorators. It grows, but never falls apart.
To summarize, Controller is a pure procedural component in the MVC trio, which turns Model into a passive data holder and View into a passive data renderer. The controller, the holder, the renderer … Is it really OOP?

https://www.yegor256.com/2016/11/29/eolang.html
EO
- Tallinn, Estonia
- Yegor Bugayenko
It’s time to do it! We’ve started work on a new programming language. Its name is EO (as in Elegant Objects or in Esperanto): eolang.org. It’s open source and community driven: yegor256/eo GitHub repo. It’s still in very early draft form, but the direction is more or less clear: It has to be truly object-oriented, with no compromises. You’re welcome to join us.

Why yet another language? Because there are no object-oriented languages on the market that are really object-oriented, to my knowledge. Here are the things I think do not belong in a pure object-oriented language:
- static methods
- classes (only types and objects)
- implementation inheritance
- mutability
- NULL
- reflection
- constants
- type casting
- annotations
- flow control (for, while, if, etc.)
And many other minor mistakes that Java and C++ are full of.
At the moment, we think that EO will compile into Java: not into bytecode, but into .java files, later compilable to bytecode.
I really count on your contribution. Please submit your ideas as tickets and pull requests to the yegor256/eo GitHub repo.

https://www.yegor256.com/2016/11/21/naked-data.html
Encapsulation Covers Up Naked Data
- Moscow, Russia
- Yegor Bugayenko
Encapsulation is the core principle of object-oriented programming that makes objects solid, cohesive, trustworthy, etc. But what exactly is encapsulation? Does it only protect against access to private attributes from outside an object? I think it’s much more. Encapsulation leads to the absence of naked data on all levels and in all forms.

This is what naked data is (C code):
int t;
t = 85;
printf("The temperature is %d F", t);
Here t is the data, which is publicly accessible by the code around it. Anyone can modify it or read it.
Why is that bad? For one reason: tight and hidden coupling.
The code around t inevitably makes a lot of assumptions about the data. For example, both lines after int t assume that the temperature is in Fahrenheit. At the moment of writing, this may be true, but the assumption couples the code with the data. If tomorrow we change t to Celsius, the code won’t know about this change. That’s why I call this coupling hidden.
If we change the type of t from int to, say, double, the printf line will quietly break, since %d expects an int (passing a double for it is undefined behavior). Again, the coupling is there, but it’s hidden. Later on, we simply won’t be able to find all the places in our code where we made these or other assumptions about t.
This will seriously affect maintainability.
And this is not a solution, as you can imagine (Java now):
class Temperature {
  private int t;
  public int getT() { return this.t; }
  public void setT(int t) { this.t = t; }
}
It looks like an object, but the data is still naked. Anyone can retrieve t from the object and decide whether it’s Fahrenheit or Celsius, whether it has digits after the dot or not, etc. This is not encapsulation yet!
The only way to encapsulate t is to make sure nobody can touch it either directly or by retrieving it from an object. How do we do that? Just stop exposing data and start exposing functionality. Here is how, for example:
class Temperature {
  private int t;
  public String toString() {
    return String.format("%d F", this.t);
  }
}
We don’t allow anyone to retrieve t anymore. All they can do is convert the temperature to text. If and when we decide to change t to Celsius, we will do it just once and in one place: in the class Temperature.
If we need other functions in the future, like math operations or conversion to Celsius, we add more methods to class Temperature. But we never let anyone touch or know about t.
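For instance, a comparison could be added as behavior rather than as exposed data. A minimal sketch, where the method name hotterThan and the constructor are my assumptions, not the author's code:

```java
class Temperature {
    private final int t; // Fahrenheit; known only inside this class

    Temperature(int t) {
        this.t = t;
    }

    @Override
    public String toString() {
        return String.format("%d F", this.t);
    }

    // Comparison stays behavior: t never leaves the object.
    // (In Java, private members are visible between instances of the same class.)
    public boolean hotterThan(Temperature other) {
        return this.t > other.t;
    }
}
```

A caller can ask `new Temperature(85).hotterThan(new Temperature(60))` without ever learning what unit, or even what numeric type, hides inside.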
This idea is close to “printers instead of getters,” which we discussed earlier, though from a much wider perspective. Here I’m saying that any data elements that escape objects are naked and lead to maintainability problems.
The question is how we can work entirely without naked data, right? Eventually we have to let objects exchange data, don’t we? Yes, that’s true. But not entirely. I’ll explain that in my next post.

https://www.yegor256.com/2016/09/20/oop-without-classes.html
OOP Without Classes?
- Palo Alto, CA
- Yegor Bugayenko
I interviewed David West, the author of the Object Thinking book, a few weeks ago, and he said that classes were not meant to be in object-oriented programming at all. He actually said that earlier; I just didn’t understand him then. The more I’ve thought about this, the more it appears obvious that we indeed do not need classes.

Here is a prototype.
Let’s say we have only types and objects. First, we define a type:
type Book {
  void print();
}
Then we create an object (pay attention; we don’t “instantiate”):
Book b1 = create Book("Object Thinking") {
  String title;
  Book(String t) {
    this.title = t;
  }
  public void print() {
    print("My title: " + this.title);
  }
}
Then we create another object, which will behave similarly to the one we already have but with different constructor arguments. We copy an existing one:
Book b2 = copy b1("Elegant Objects");
Libraries will deliver us objects, which we can copy.
That’s it.
No implementation inheritance and no static methods, of course. Only subtyping.
Why not?
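For comparison, the nearest today's Java gets to "only types and objects" is an interface as the type plus objects created anonymously, with no named implementation class. A sketch under my own assumptions (the Book type, title() method, and Books.book() factory are all illustrative, not part of the prototype above):

```java
interface Book {
    String title();
}

class Books {
    // Plays the role of "create"/"copy": each call yields a fresh object
    // with a different constructor argument, no named class in between.
    static Book book(String title) {
        return new Book() { // an anonymous object, not a reusable class
            @Override
            public String title() {
                return title; // captures the effectively final parameter
            }
        };
    }
}
```

Then `Books.book("Object Thinking")` and `Books.book("Elegant Objects")` give two objects of the same type, differing only in the argument they were built from.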
https://www.yegor256.com/2016/09/13/inheritance-is-procedural.html
Inheritance Is a Procedural Technique for Code Reuse
- Palo Alto, CA
- Yegor Bugayenko
We all know that inheritance is bad and that composition over inheritance is a good idea, but do we really understand why? In almost all the articles I’ve found addressing this subject, the authors say that inheritance may be harmful to your code, so it’s better not to use it. This “better” part is what bothers me; does it mean that sometimes inheritance makes sense? I interviewed David West (the author of Object Thinking, my favorite book about OOP) a few weeks ago, and he said that inheritance should not exist in object-oriented programming at all (full video). Maybe Dr. West is right and we should totally forget the extends keyword in Java, for example.

I think we should. And I think I know the reason why.
It’s not because we introduce unnecessary coupling, as Allen Holub said in his Why extends is evil article. He was definitely right, but I believe it’s not the root cause of the problem.
“Inherit,” as an English verb, has a number of meanings. This one is what inheritance inventors in Simula had in mind, I guess: “Derive (a quality, characteristic, or predisposition) genetically from one’s parents or ancestors.”
Deriving a characteristic from another object is a great idea, and it’s called subtyping. It perfectly fits into OOP and actually enables polymorphism: An object of class Article inherits all characteristics of objects in class Manuscript and adds its own. For example, it inherits an ability to print itself and adds an ability to submit itself to a conference:
interface Manuscript {
  void print(Console console);
}
interface Article extends Manuscript {
  void submit(Conference cnf);
}
This is subtyping, and it’s a perfect technique; whenever a manuscript is required, we can provide an article and nobody will notice anything, because type Article is a subtype of type Manuscript (Liskov substitution principle).
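To see the substitution at work, here is a minimal runnable sketch of mine; the Console and Conference stubs and the publish() method are hypothetical additions, not part of the original example:

```java
// Hypothetical stubs so the sketch compiles on its own.
interface Console {
    void println(String text);
}
interface Conference {
    void send(String body);
}
interface Manuscript {
    void print(Console console);
}
interface Article extends Manuscript {
    void submit(Conference cnf);
}
class Subtyping {
    // This method knows only about Manuscript...
    static void publish(Manuscript m, Console console) {
        m.print(console);
    }
    public static void main(String[] args) {
        StringBuilder out = new StringBuilder();
        Article article = new Article() {
            @Override public void print(Console console) {
                console.println("On Object Thinking");
            }
            @Override public void submit(Conference cnf) {
                cnf.send("On Object Thinking");
            }
        };
        // ...yet it happily accepts an Article, because Article is a subtype.
        publish(article, out::append);
        System.out.println(out);
    }
}
```

Nothing in publish() has to change when new subtypes of Manuscript appear; that is the substitution at work.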
But what does copying methods and attributes from a parent class to a child one have to do with “deriving characteristics?” Implementation inheritance is exactly that—copying—and it has nothing to do with the meaning of the word “inherit” I quoted above.
Implementation inheritance is much closer to a different meaning: “Receive (money, property, or a title) as an heir at the death of the previous holder.” Who is dead, you ask? An object is dead if it allows other objects to inherit its encapsulated code and data. This is implementation inheritance:
class Manuscript {
  protected String body;
  void print(Console console) {
    console.println(this.body);
  }
}
class Article extends Manuscript {
  void submit(Conference cnf) {
    cnf.send(this.body);
  }
}
Class Article copies method print() and attribute body from class Manuscript, as if it’s not a living organism, but rather a dead one from which we can inherit its parts, “money, properties, or a title.”
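For contrast, here is my own sketch of how the same reuse could be achieved without extends: Article encapsulates a Manuscript and delegates to it, so no code or data is copied. The Console and Conference stubs are hypothetical, and the body() reader exists only to keep the example short:

```java
interface Console {
    void println(String text);
}
interface Conference {
    void send(String body);
}
class Manuscript {
    private final String body;
    Manuscript(String body) {
        this.body = body;
    }
    void print(Console console) {
        console.println(this.body);
    }
    String body() {
        return this.body;
    }
}
// Composition: Article wraps a live Manuscript instead of inheriting its parts.
class Article {
    private final Manuscript origin;
    Article(Manuscript origin) {
        this.origin = origin;
    }
    void print(Console console) {
        this.origin.print(console); // reuse by delegation, not by copying
    }
    void submit(Conference cnf) {
        cnf.send(this.origin.body());
    }
}
class CompositionDemo {
    public static void main(String[] args) {
        StringBuilder out = new StringBuilder();
        new Article(new Manuscript("On Object Thinking")).print(out::append);
        System.out.println(out);
    }
}
```

The manuscript stays alive: nothing is taken from it, and Article merely asks it to do the printing.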

Implementation inheritance was created as a mechanism for code reuse, and it doesn’t fit into OOP at all. Yes, it may look convenient in the beginning, but it is absolutely wrong in terms of object thinking. Just like getters and setters, implementation inheritance turns objects into containers with data and procedures. Of course, it’s convenient to copy some of those data and procedures to a new object in order to avoid code duplication. But this is not what objects are about. They are not dead; they are alive!
Don’t kill them with inheritance :)
Thus, I think inheritance is bad because it is a procedural technique for code reuse. It comes as no surprise that it introduces all the problems people have been talking about for years. Because it is procedural! That’s why it doesn’t fit into object-oriented programming.
By the way, we discussed this problem in our Gitter chat (it’s dead already) a week ago, and that’s when it became obvious to me what exactly is wrong with inheritance. Take a look at our discussion there.
https://www.yegor256.com/2016/09/07/gradients-of-immutability.html
Gradients of Immutability
- Palo Alto, CA
- Yegor Bugayenko
Good objects are immutable, but not necessarily constants. I tried to explain it here, here, and here, but now it’s time to make another attempt. Actually, the more I think about it, the more I realize that immutability is not black or white—there are a few more gradients; let’s take a look.

As we agreed here, an object is a representative of someone else (some entity or entities, other object(s), data, memory, files, etc.). Let’s examine a number of objects that look exactly the same to us but represent different things, then analyze how immutable they are and why.
Constant
This is a constant; it doesn’t allow any modifications to the encapsulated entity and always returns the same text (I’ve skipped constructors for the sake of brevity):
class Book {
  private final String ttl;
  Book rename(String title) {
    return new Book(title);
  }
  String title() {
    return this.ttl;
  }
}
This is what we usually have in mind when talking about immutable objects. Such a class is very close to a pure function, which means that no matter how many times we instantiate it with the same initial values, the result of title() will be the same.
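A quick usage sketch of mine (with the skipped constructor restored): rename() produces a new object, and the original keeps answering with the same title, which is exactly what makes it a constant:

```java
class Book {
    private final String ttl;
    Book(String title) {
        this.ttl = title;
    }
    Book rename(String title) {
        return new Book(title); // a new object; the original is untouched
    }
    String title() {
        return this.ttl;
    }
}
class ConstantDemo {
    public static void main(String[] args) {
        Book first = new Book("Object Thinking");
        Book second = first.rename("Elegant Objects");
        System.out.println(first.title());  // still "Object Thinking"
        System.out.println(second.title()); // "Elegant Objects"
    }
}
```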
Not a Constant
Check out this one:
class Book {
  private final String ttl;
  Book rename(String title) {
    return new Book(title);
  }
  String title() {
    return String.format(
      "%s (as of %tR)", this.ttl, new Date()
    );
  }
}
The object is still immutable, but it is not a pure function anymore because of the method title()—it returns different values if we call it multiple times with at least a one-minute interval. The object is immutable; it’s just not a constant anymore.
Represented Mutability
How about this one:
class Book {
  private final Path path;
  Book rename(String title) {
    Files.write(
      this.path,
      title.getBytes(),
      StandardOpenOption.CREATE
    );
    return this;
  }
  String title() {
    return new String(
      Files.readAllBytes(this.path)
    );
  }
}
This immutable object keeps the book title in a file. It’s not a constant, because its method title() may return different values on every second call. Moreover, the represented entity (the file) is not a constant. We can’t say whether it’s mutable or immutable, as we don’t know how Files.write() is implemented. But we know for sure that it’s not a constant, because it accepts change requests.
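To make the point concrete, here is my sketch of the same class with the constructor and checked-exception plumbing the original snippet omits (I also added TRUNCATE_EXISTING so a shorter title fully replaces a longer one). Two objects sharing the same path see each other’s renames, because the state lives in the represented file, not in the object:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

class Book {
    private final Path path;
    Book(Path path) {
        this.path = path;
    }
    Book rename(String title) {
        try {
            Files.write(this.path, title.getBytes(),
                StandardOpenOption.CREATE, StandardOpenOption.TRUNCATE_EXISTING);
        } catch (IOException ex) {
            throw new UncheckedIOException(ex);
        }
        return this;
    }
    String title() {
        try {
            return new String(Files.readAllBytes(this.path));
        } catch (IOException ex) {
            throw new UncheckedIOException(ex);
        }
    }
}
class RepresentedDemo {
    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("book", ".txt");
        Book first = new Book(tmp);
        Book second = new Book(tmp); // a different object, the same file
        first.rename("Elegant Objects");
        // second was never touched, yet its answer changed with the file:
        System.out.println(second.title());
        Files.delete(tmp);
    }
}
```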
Encapsulated Mutability
An immutable object may not only represent but even encapsulate a mutable one. Just like in the previous example, a mutable file was encapsulated. Even though it was represented by the immutable class Path, the real file on disk was mutable. We can do the same, but in memory:
class Book {
  private final StringBuffer buffer;
  Book rename(String title) {
    this.buffer.setLength(0);
    this.buffer.append(title);
    return this;
  }
  String title() {
    return this.buffer.toString();
  }
}
The object is still immutable. Is it thread-safe? No. Is it a constant? No. Is it immutable? Yes. Confused? You bet.
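My sketch below (constructor added) shows why this case is confusing: the field is final and rename() returns the very same object, yet that object’s answer changes in place, because the encapsulated StringBuffer mutates:

```java
class Book {
    private final StringBuffer buffer;
    Book(String title) {
        this.buffer = new StringBuffer(title);
    }
    Book rename(String title) {
        this.buffer.setLength(0);
        this.buffer.append(title);
        return this; // the same object, not a copy
    }
    String title() {
        return this.buffer.toString();
    }
}
class EncapsulatedDemo {
    public static void main(String[] args) {
        Book book = new Book("Object Thinking");
        Book same = book.rename("Elegant Objects");
        System.out.println(book == same);  // true: no new object was made
        System.out.println(book.title());  // the answer changed in place
    }
}
```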
My point is that immutability is not binary; there are many forms of it. The simplest one is, of course, a constant. Constants are almost the same as pure functions in functional programming. But object-oriented programming allows us to take a few steps forward and give immutable objects more permissions and flexibility. In OOP, we may have many more forms of immutability.
What is common among all these examples is that our objects are loyal to the entities they encapsulate. There are no setters that could change them. All encapsulated objects are final.
This is the only quality that differentiates mutable objects from immutable ones. The latter are always loyal to the entities they encapsulate and represent. For all the rest … it depends.
https://www.yegor256.com/2016/08/30/decomposition-of-responsibility.html
Vertical vs. Horizontal Decomposition of Responsibility
- Palo Alto, CA
- Yegor Bugayenko
Objects responsible for too many things are a problem. Because their complexity is high, they are difficult to maintain and extend. Decomposition of responsibility is what we do in order to break these overly complex objects into smaller ones. I see two types of this refactoring operation: vertical and horizontal. And I believe the former is better than the latter.

Let’s say this is our code (it is Ruby):
class Log
  def initialize(path)
    @file = File.new(path, 'a')
  end
  def put(text)
    line = Time.now.strftime("%d/%m/%Y %H:%M ") + text
    @file.puts line
  end
end
Obviously, objects of this class are doing too much. They save log lines to the file and also format them—an obvious violation of the famous single responsibility principle. An object of this class would be responsible for too many things. We have to extract some functionality out of it and put it into another object (or objects). We have to decompose its responsibility. No matter where we put it, this is how the Log class will look after the extraction:
class Log
  def initialize(path)
    @file = File.new(path, 'a')
  end
  def put(line)
    @file.puts line
  end
end
Now it only saves lines to the file, which is perfect. The class is cohesive and small. Let’s make an instance of it:
log = Log.new('/tmp/log.txt')
Next, where do we put the formatting functionality that was just extracted? There are two approaches to decomposing responsibility: horizontal and vertical. This one is horizontal:
class Line
  def initialize(text)
    @line = text
  end
  def to_s
    Time.now.strftime("%d/%m/%Y %H:%M ") + @line
  end
end
In order to use Log and Line together, we have to do this:
log.put(Line.new("Hello, world"))
See why it’s horizontal? Because this script sees them both. They are both on the same level of visibility. We will always have to communicate with both of them when we want to log a line. Both the Log and the Line objects are in front of us; we have to deal with two classes in order to log a line.
On the contrary, this decomposition of responsibility is vertical:
class TimedLog
  def initialize(log)
    @origin = log
  end
  def put(text)
    @origin.put(Time.now.strftime("%d/%m/%Y %H:%M ") + text)
  end
end
Class TimedLog is a decorator, and this is how we use them together:
log = TimedLog.new(log)
Now, we just put a line in the log:
log.put("Hello, world")
The responsibility is decomposed vertically. We still have one entry point into the log object, but the object “consists” of two objects, one wrapped into another.
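Because the decorator shares the log’s interface, decorators stack, and the client still sees a single entry point. Here is a sketch of the same idea in Java (the types are mine, and a deterministic prefix decorator stands in for the time-stamping one):

```java
interface Log {
    void put(String text);
}
// Stores lines; into memory here, just to keep the sketch self-contained.
class MemoryLog implements Log {
    private final StringBuilder lines = new StringBuilder();
    @Override public void put(String text) {
        this.lines.append(text).append('\n');
    }
    String content() {
        return this.lines.toString();
    }
}
// A decorator: adds a prefix, then delegates to the wrapped log.
class PrefixedLog implements Log {
    private final Log origin;
    private final String prefix;
    PrefixedLog(Log origin, String prefix) {
        this.origin = origin;
        this.prefix = prefix;
    }
    @Override public void put(String text) {
        this.origin.put(this.prefix + text);
    }
}
class DecoratorDemo {
    public static void main(String[] args) {
        MemoryLog memory = new MemoryLog();
        // Decorators stack; the client still talks to a single Log.
        Log log = new PrefixedLog(new PrefixedLog(memory, "INFO "), "app: ");
        log.put("Hello, world");
        System.out.print(memory.content()); // INFO app: Hello, world
    }
}
```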
In general, I think horizontal decomposition of responsibility is a bad idea, while vertical is a much better one. That’s because a vertically decomposed object decreases complexity, while a horizontally decomposed one actually makes things more complex because its clients have to deal with more dependencies and more points of contact.
Objects responsible for too many things are a problem. Because their complexity is high, they are difficult to maintain and extend. Decomposition of responsibility is what we do in order to break these overly complex objects into smaller ones. I see two types of this refactoring operation: vertical and horizontal. And I believe the former is better than the latter.

Let’s say this is our code (it is Ruby):
class Log
def initialize(path)
@file = File.new(path, 'a')
end
def put(text)
line = Time.now.strftime("%d/%m/%Y %H:%M ") + text
@file.puts line
end
end

Obviously, objects of this class are doing too much. They save log lines to the file and also format them—an obvious violation of the famous single responsibility principle. An object of this class would be responsible for too many things. We have to extract some functionality out of it and put it into another object (or several). We have to decompose its responsibility. No matter where we put it, this is how the Log class will look after the extraction:
class Log
def initialize(path)
@file = File.new(path, 'a')
end
def put(line)
@file.puts line
end
end

Now it only saves lines to the file, which is perfect. The class is cohesive and small. Let’s make an instance of it:
log = Log.new('/tmp/log.txt')

Next, where do we put the formatting functionality that was just extracted? There are two approaches to decomposing responsibility: horizontal and vertical. This one is horizontal:
class Line
def initialize(text)
@line = text
end
def to_s
Time.now.strftime("%d/%m/%Y %H:%M ") + @line
end
end

In order to use Log and Line together, we have to do this:
log.put(Line.new("Hello, world"))

See why it’s horizontal? Because the client script sees both classes: they sit on the same level of visibility, and we have to deal with both of them every time we want to log a line.
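To see that cost concretely, here is the horizontal pairing as a small runnable sketch; a StringIO stands in for the file, which is my assumption for demonstration purposes:

```ruby
require 'stringio'

# Writes lines to an IO stream (in-memory here, instead of a file).
class Log
  def initialize(io)
    @io = io
  end

  def put(line)
    @io.puts line
  end
end

# Knows how to render one log line with a timestamp prefix.
class Line
  def initialize(text)
    @line = text
  end

  def to_s
    Time.now.strftime('%d/%m/%Y %H:%M ') + @line
  end
end

buffer = StringIO.new
log = Log.new(buffer)
# The client has to touch both classes on every call:
log.put(Line.new('Hello, world'))
puts buffer.string
```

Note that `IO#puts` calls `to_s` on its argument, which is what makes the `Line` object printable at all; the client still had to construct it by hand.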
By contrast, this decomposition of responsibility is vertical:
class TimedLog
def initialize(log)
@origin = log
end
def put(text)
@origin.put(Time.now.strftime("%d/%m/%Y %H:%M ") + text)
end
end

Class TimedLog is a decorator, and this is how we use them together:
log = TimedLog.new(log)

Now, we just put a line in the log:
log.put("Hello, world")

The responsibility is decomposed vertically. We still have one entry point into the log object, but the object “consists” of two objects, one wrapped into the other.
In general, I think horizontal decomposition of responsibility is a bad idea, while vertical is a much better one. That’s because a vertically decomposed object decreases complexity, while a horizontally decomposed one actually makes things more complex because its clients have to deal with more dependencies and more points of contact.
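The vertical approach can be exercised end to end in a short runnable sketch; as an assumption for demonstration, a StringIO stands in for the file:

```ruby
require 'stringio'

# Core responsibility: writing lines to an IO stream.
class Log
  def initialize(io)
    @io = io
  end

  def put(line)
    @io.puts line
  end
end

# Decorator: prefixes a timestamp, then delegates to the wrapped log.
class TimedLog
  def initialize(log)
    @origin = log
  end

  def put(text)
    @origin.put(Time.now.strftime('%d/%m/%Y %H:%M ') + text)
  end
end

buffer = StringIO.new
log = TimedLog.new(Log.new(buffer))
# The client talks to a single entry point; the wrapping is invisible:
log.put('Hello, world')
puts buffer.string
```

The client code never changes when we add or remove decorators—only the composition line does.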
Please, use syntax highlighting in your comments to make them more readable.

https://www.yegor256.com/2016/08/15/what-is-wrong-object-oriented-programming.html
What's Wrong With Object-Oriented Programming?
- Palo Alto, CA
- Yegor Bugayenko
- comments
- Discussed at:
- dzone
Recently, I was trying to convince a few of my readers that a better understanding of an object in OOP would help us solve many problems in existing pseudo-object-oriented languages. Then, suddenly, the question came up: “What problems?” I was puzzled. I thought it was obvious that the vast majority of modern software written in modern OO languages is unmaintainable and simply a mess. So I Googled a bit, and this is what I found (in chronological order).

The list of quotes is sorted in chronological order, with the oldest at the top:

Edsger W. Dijkstra (1989)
“TUG LINES,” Issue 32, August 1989
“Object oriented programs are offered as alternatives to correct ones” and “Object-oriented programming is an exceptionally bad idea which could only have originated in California.”

Alan Kay (1997)
The Computer Revolution hasn’t happened yet
“I invented the term object-oriented, and I can tell you I did not have C++ in mind.” and “Java and C++ make you think that the new ideas are like the old ones. Java is the most distressing thing to happen to computing since MS-DOS.” (proof)

Paul Graham (2003)
The Hundred-Year Language
“Object-oriented programming offers a sustainable way to write spaghetti code.”

Richard Mansfield (2005)
Has OOP Failed?
“With OOP-inflected programming languages, computer software becomes more verbose, less readable, less descriptive, and harder to modify and maintain.”

Eric Raymond (2005)
The Art of UNIX Programming
“The OO design concept initially proved valuable in the design of graphics systems, graphical user interfaces, and certain kinds of simulation. To the surprise and gradual disillusionment of many, it has proven difficult to demonstrate significant benefits of OO outside those areas.”

Jeff Atwood (2007)
Your Code: OOP or POO?
“OO seems to bring at least as many problems to the table as it solves.”

Linus Torvalds (2007)
this email
“C++ is a horrible language. … C++ leads to really, really bad design choices. … In other words, the only way to do good, efficient, and system-level and portable C++ ends up to limit yourself to all the things that are basically available in C. And limiting your project to C means that people don’t screw that up, and also means that you get a lot of programmers that do actually understand low-level issues and don’t screw things up with any idiotic “object model” crap.”

Oscar Nierstrasz (2010)
Ten Things I Hate About Object-Oriented Programming
“OOP is about taming complexity through modeling, but we have not mastered this yet, possibly because we have difficulty distinguishing real and accidental complexity.”

Rich Hickey (2010)
SE Radio, Episode 158
“I think that large objected-oriented programs struggle with increasing complexity as you build this large object graph of mutable objects. You know, trying to understand and keep in your mind what will happen when you call a method and what will the side effects be.”

Eric Allman (2011)
Programming Isn’t Fun Any More
“I used to be enamored of object-oriented programming. I’m now finding myself leaning toward believing that it is a plot designed to destroy joy. The methodology looks clean and elegant at first, but when you actually get into real programs they rapidly turn into horrid messes.”

Joe Armstrong (2011)
Why OO Sucks
“Objects bind functions and data structures together in indivisible units. I think this is a fundamental error since functions and data structures belong in totally different worlds.”

Rob Pike (2012)
here
“Object-oriented programming, whose essence is nothing more than programming using data with associated behaviors, is a powerful idea. It truly is. But it’s not always the best idea. … Sometimes data is just data and functions are just functions.”

John Barker (2013)
All evidence points to OOP being bullshit
“What OOP introduces are abstractions that attempt to improve code sharing and security. In many ways, it is still essentially procedural code.”

Lawrence Krubner (2014)
Object Oriented Programming is an expensive disaster which must end
“We now know that OOP is an experiment that failed. It is time to move on. It is time that we, as a community, admit that this idea has failed us, and we must give up on it.”

Asaf Shelly (2015)
Flaws of Object Oriented Modeling
“Reading an object oriented code you can’t see the big picture and it is often impossible to review all the small functions that call the one function that you modified.”
If you have something to add to this list, please post a comment below.

https://www.yegor256.com/2016/08/10/if-then-else-code-smell.html
If-Then-Else Is a Code Smell
- Tallinn, Estonia
- Yegor Bugayenko
- comments
In most cases (maybe even in all of them), if-then-else can and must be replaced by a decorator or simply another object. I’ve been planning to write about this for almost a year but only today found a real case in my own code that perfectly illustrates the problem. So it’s time to demonstrate it and explain.

Take a look at the class DyTalk from yegor256/rultor and its method modify(). In a nutshell, it prevents you from saving any data to DynamoDB when there are no modifications of the XML document. It’s a valid case, and it has to be handled, but the way it’s implemented is simply wrong. This is how it works (an oversimplified example):
class DyTalk implements Talk {
void modify(Collection<Directive> dirs) {
if (!dirs.isEmpty()) {
// Apply the modification
// and save the new XML document
// to the DynamoDB table.
}
}
}

What’s wrong, you wonder? This if-then-else forking doesn’t really belong to this object—that’s what’s wrong. Modifying the XML document and saving it to the database is its functionality, while skipping the save when the set of modification instructions is empty is not (it’s very similar to defensive programming). Instead, there should be a decorator, which would look like this:
class QuickTalk implements Talk {
private final Talk origin;
QuickTalk(Talk origin) {
this.origin = origin;
}
void modify(Collection<Directive> dirs) {
if (!dirs.isEmpty()) {
this.origin.modify(dirs);
}
}
}

Now, if and when we need our talk to be more clever in situations where the list of directives is empty, we decorate it with QuickTalk. The benefits are obvious: the DyTalk class is smaller and therefore more cohesive.
But the question is bigger than just that. Can we make a rule out of it? Can we say that each and every forking is bad and should be moved out of a class? What about forking that happens inside a method and can’t be converted to a decorator?
I’m suggesting this simple rule: If it’s possible to convert if-then-else forking to a decorator, it has to be done. If it’s not done, it’s a code smell. Make sense?
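Since the Java snippets above are oversimplified, here is the same decorator move as a runnable Ruby sketch; RecordingTalk is a hypothetical stand-in I’m using for the DynamoDB-backed talk:

```ruby
# Hypothetical stand-in for the DynamoDB-backed talk: it just records
# every set of directives it is asked to persist.
class RecordingTalk
  attr_reader :saves

  def initialize
    @saves = []
  end

  def modify(dirs)
    @saves << dirs # in real life, an expensive database write
  end
end

# The decorator: skips the expensive call when there is nothing to do.
class QuickTalk
  def initialize(origin)
    @origin = origin
  end

  def modify(dirs)
    @origin.modify(dirs) unless dirs.empty?
  end
end

inner = RecordingTalk.new
talk = QuickTalk.new(inner)
talk.modify([])            # filtered out by the decorator
talk.modify(['set-title']) # reaches the origin
puts inner.saves.length
```

The inner object never sees the empty call, yet its own code contains no trace of that policy—the fork lives entirely in the decorator.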

https://www.yegor256.com/2016/07/26/active-record.html
ActiveRecord Is Even Worse Than ORM
- Palo Alto, CA
- Yegor Bugayenko
- Discussed at:
- dzone
You probably remember what I think about ORM, a very popular design pattern. In a nutshell, it encourages us to turn objects into DTOs, which are anemic, passive, and not objects at all. The consequences are usually dramatic: the entire programming paradigm shifts from object-oriented to procedural. I tried to explain this at JPoint and JEEConf this year. After each talk, a few people told me that what I’m suggesting is called the ActiveRecord or Repository pattern.

Moreover, they claimed that ActiveRecord actually solves the problem I’ve found in ORM. They said I should explain in my talks that what I’m offering (SQL-speaking objects) already exists and has a name: ActiveRecord.
I disagree. Moreover, I think that ActiveRecord is even worse than ORM.
ORM consists of two parts: the session and DTOs, also known as “entities.” The entities have no functionality; they are just primitive containers for the data transferred from and to the session. And that is what the problem is—objects don’t encapsulate but rather expose data. To understand why this is wrong and why it’s against the object paradigm, you can read here, here, here, here, and here. Now, let’s just agree that it’s very wrong and move on.
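To make “entities have no functionality” concrete, here is a minimal sketch of such an anemic entity. The Book class and its fields are hypothetical; a real JPA entity would also carry mapping annotations:

```java
// A typical ORM "entity": nothing but fields, getters, and setters.
// It encapsulates no behavior; any code can read and rewrite its state.
class Book {
    private int id;
    private String title;

    int getId() { return this.id; }
    void setId(int id) { this.id = id; }
    String getTitle() { return this.title; }
    void setTitle(String title) { this.title = title; }
}
```

All the real work, SQL and transactions, lives in the session, which treats Book as a mutable bag of data.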
What solution is ActiveRecord proposing? How is it solving the problem? It moves the engine into the parent class, which all our entities inherit from. This is how we were supposed to save our entity to the database in the ORM scenario (pseudo-code):
book.setTitle("Java in a Nutshell");
session.update(book);

And this is what we do with an ActiveRecord:

book.setTitle("Java in a Nutshell");
book.update();

The method update() is defined in book’s parent class and uses book as a data container. When called, it fetches the data from the container (the book) and updates the database. How is it different from ORM? There is absolutely no difference. The book is still a container that knows nothing about SQL or any persistence mechanism.
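The mechanics can be sketched like this; the DbRecord class and its columns() method are my own stand-ins for what real frameworks do with reflection or code generation, and update() returns the SQL statement instead of executing it, only to keep the sketch self-contained:

```java
// ActiveRecord-style parent class: the persistence "engine" moves into
// the base class, but the subclass remains a passive data container.
abstract class DbRecord {
    // A real framework would build and execute the statement; here we
    // just return it, to show where the data comes from.
    String update() {
        return "UPDATE book SET " + this.columns();
    }

    // Each record hands its column/value pairs over to the engine.
    abstract String columns();
}

class Book extends DbRecord {
    private String title;

    void setTitle(String title) { this.title = title; }

    @Override
    String columns() { return "title='" + this.title + "'"; }
}
```

The call site looks object-oriented (book.update()), but update() merely pulls the data out of the container, exactly as a session would.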
What’s even worse in ActiveRecord, compared to ORM, is that it hides the fact that objects are data containers. A book, in the second snippet, pretends to be a proper object, while in reality it’s just a dumb data bag.
I believe this is what misguided those who were saying that my SQL-speaking objects concept is exactly the same as the ActiveRecord design pattern (or Repository, which is almost exactly the same).
No, it’s not.
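For contrast, here is a minimal sketch of what I mean by a SQL-speaking object. The PgBook class is hypothetical, and a Map stands in for the database just to keep the sketch runnable; a real one would hold a JDBC data source and speak actual SQL:

```java
import java.util.HashMap;
import java.util.Map;

// A SQL-speaking object owns its persistence: it knows its id and its
// data source, and it updates itself. No getters, no setters, no engine
// pulling data out of it.
class PgBook {
    private final Map<Integer, String> db; // stand-in for a JDBC source
    private final int id;

    PgBook(Map<Integer, String> db, int id) {
        this.db = db;
        this.id = id;
    }

    // Conceptually: "UPDATE book SET title = ? WHERE id = ?"
    void rename(String title) {
        this.db.put(this.id, title);
    }
}
```

Nobody asks this object for its title in order to save it somewhere; the object is the only one who talks to its storage.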
https://www.yegor256.com/2016/07/18/law-of-demeter.html
The Law of Demeter Doesn't Mean One Dot
- Palo Alto, CA
- Yegor Bugayenko
You’ve probably heard about that 30-year-old Law of Demeter (LoD). Someone asked me recently what I think about it. And not just what I think, but how it is possible to keep objects small and obey the LoD. According to the law, we’re not allowed to do something like book.pages().last().text(). Instead, we’re supposed to go with book.textOfLastPage(). It puzzled me, because I strongly disagree. I believe the first construct is perfectly valid in OOP. So I’ve done some research to find out whether this law is really a law. What I found out is that the law is perfect, but its common understanding in the OOP world is simply wrong (not surprisingly).

Object-Oriented Programming: An Objective Sense of Style, K.Lieberherr, I.Holland, and A.Riel, OOPSLA’88 Proceedings, 1988.
This is where it was introduced. Let’s see what it literally says (look for Section 3 in that PDF document):
For all classes C, and for all methods M attached to C, all objects to which M sends a message must be instances of classes associated with the following classes: 1) the argument classes of M (including C), 2) the instance variable classes of C.
Say it’s a Java class:
class C {
  private B b;
  void m(A a) {
    b.hello();
    a.hello();
    Singleton.INSTANCE.hello();
    new Z().hello();
  }
}

All four calls to four different hello() methods are legal, according to the LoD. So what would be illegal, I ask myself? No surprise; the answer is this: a.x.hello(). That would be illegal. Directly accessing the attribute of another object and then talking to it is not allowed by the law.
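And a minimal sketch of the forbidden construct, with hypothetical classes (hello() returns a number only to make the snippet easy to check):

```java
class X {
    int hello() { return 1; }
}

class A {
    X x = new X(); // an attribute of A, visible to the caller
    int hello() { return 2; }
}

class Caller {
    int m(A a) {
        return a.hello()    // legal: a is an argument of m
             + a.x.hello(); // illegal per the LoD: we reach through
                            // a's attribute and talk to it directly
    }
}
```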
But we don’t do that anyway. We’re talking about book.pages().last().text(). In this chain of method calls, we’re not accessing any attributes. We’re asking our objects to build new objects for us. What does the law say about that? Let me read it and quote:
Objects created by M, or by functions or methods that M calls, are considered as arguments of M
In other words, the Pages object that the call book.pages() returns is a perfectly valid object that can be used. Then, we can call last() on it and get a Page object, then call text() on that, etc. This is a perfectly valid scenario that doesn’t violate the law at all, just as I expected.
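The chain can be sketched with interfaces whose methods build new objects on request. All the types here are hypothetical, and the in-memory implementation exists only to show that the chain runs:

```java
// Each call in book.pages().last().text() asks an object to create a
// new object; by the 1988 paper, objects created by M count as
// arguments of M, so every step of the chain is legal.
interface Page {
    String text();
}

interface Pages {
    Page last();
}

interface Book {
    Pages pages();
}

// A trivial in-memory Book; a real one might build Pages that talk to
// a file or a database.
class SimpleBook implements Book {
    private final String content;

    SimpleBook(String content) {
        this.content = content;
    }

    @Override
    public Pages pages() {
        String c = this.content;
        return () -> () -> c; // Pages.last() yields a Page; Page.text() yields c
    }
}
```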
So where does this common understanding of the law come from? Why does Wikipedia call it a rule of “one dot” and say that “an object should avoid invoking methods of a member object returned by another method?” This is absolutely contrary to what the original paper says! What’s going on?
The answer is simple: getters.
The majority of OOP developers think most object methods that return anything are getters. And getters, indeed, are no different than direct access to object attributes. That’s why Wikipedia actually says “no direct access to attributes and, since most of your methods are getters, don’t touch them either, silly.”
That’s just sad to see.
So the bottom line is that the Law of Demeter is not against method chaining at all. Of course, it’s against getters and direct attribute access. But who isn’t, right?

https://www.yegor256.com/2016/07/14/who-is-object.html
Who Is an Object?
- Palo Alto, CA
- Yegor Bugayenko
- Translated:
- Russian
There are thousands of books about object-oriented programming and hundreds of object-oriented languages, and I believe most (read “all”) of them give us an incorrect definition of an “object.” That’s why the entire OOP world is so full of misconceptions and mistakes. Their definition of an object is limited by the hardware architecture they are working with, which is why it is so primitive and mechanical. I’d like to introduce a better one.

What is an object? I’ve done a little research, and this is what I’ve found:
“Objects may contain data, in the form of fields, often known as attributes; and code, in the form of procedures, often known as methods”—Wikipedia at the time of writing.
“An object stores its state in fields and exposes its behavior through methods”—What Is an Object? by Oracle.
“Each object looks quite a bit like a little computer—it has a state, and it has operations that you can ask it to perform”—Thinking in Java, 4th Ed., Bruce Eckel, p. 16.
“A class is a collection of data fields that hold values and methods that operate on those values”—Java in a Nutshell, 6th Ed., Evans and Flanagan, p. 98.
“An object is some memory that holds a value of some type”—The C++ Programming Language, 4th Ed., Bjarne Stroustrup, p. 40.
“An object consists of some private memory and a set of operations”—Smalltalk-80, Goldberg and Robson, p. 6.
What is common throughout all these definitions is the word “contains” (or “holds,” “consists,” “has,” etc.). They all think that an object is a box with data. And this perspective is exactly what I’m strongly against.
If we look at how C++ or Java are implemented, such a definition of an object will sound technically correct. Indeed, for each object, Java Virtual Machine allocates a few bytes in memory in order to store object attributes there. Thus, we can technically say, in that language, that an object is an in-memory box with data.
Right, but this is just a corner case!
Let’s try to imagine another object-oriented language that doesn’t store object attributes in memory. Confused? Bear with me for a minute. Let’s say that in that language we define an object:
c {
  vin: v,
  engine: e
}

Here, vin and engine are attributes of object c (it’s a car; let’s forget about classes for now to focus strictly on objects). Thus, there is a simple object that has two attributes. The first one is the car’s VIN, and the second one is its engine. The VIN is an object v, while the engine is e. To make it easier to understand, this is how a similar object would look in Java:
char[] v = {'W','D','B','H',...'7','2','8','8'}; // 17 chars
Engine e = new Engine();
Car c = new Car(v, e);

I’m not entirely sure about the JVM, but in C++ such an object will take exactly 25 bytes in memory (assuming a 64-bit x86 architecture). The first 17 bytes will be taken by the array of chars and another 8 bytes by a pointer to the block in memory with object e. That’s how the C++ compiler understands objects and translates them to the x86 architecture. In C++, objects are just data structures with a clearly defined allocation of data attributes.
In that example, attributes vin and engine are not equal: vin is “data,” while engine is a “pointer” to another object. I intentionally made it this way in order to demonstrate that calling an object a box with data is possible only with vin. Only when the data are located right “inside” the object can we say that the object is actually a box for the data. With engine, it isn’t really true because there is technically no data inside the object. Instead, there is a pointer to another object. If our object had only an engine attribute, it would take just 8 bytes in memory, with none of them actually occupied by “data.”
Now, let’s get back to our new pseudo-language. Let’s imagine it treats objects very differently from C++: it doesn’t keep object attributes in memory at all. It doesn’t have pointers, and it doesn’t know anything about the x86 architecture. It just knows, somehow, what attributes belong to an object.
Thus, in our language, objects are no longer boxes with data, either technically or conceptually. They know where the data is, but they don’t contain it. They represent the data, as well as other objects and entities. Indeed, the object c in our imaginary language represents two other objects: a VIN and an engine.
To summarize, we have to understand that even though a mechanical definition of an object is correct in most programming languages on the market at the moment, it is very incorrect conceptually because it treats an object as a box with data that are too visible to the outside world. That visibility provokes us to think procedurally and try to access that data as much as possible.

If we would think of an object as a representative of data instead of a container of them, we would not want to get a hold of data as soon as possible. We would understand that the data are far away and we can’t just easily touch them. We should communicate with an object—and how exactly it communicates with the data is not our concern.
I hope that in the near future, the market will introduce new object-oriented languages that won’t store objects as in-memory data structures, even technically.
By the way, here is the definition of an object from my favorite book, Object Thinking by David West, p. 66:
An object is the equivalent of the quanta from which the universe is constructed
What do you think? Is it close to the “representative” definition I just proposed?
There are thousands of books about object-oriented programming and hundreds of object-oriented languages, and I believe most (read “all”) of them give us an incorrect definition of an “object.” That’s why the entire OOP world is so full of misconceptions and mistakes. Their definition of an object is limited by the hardware architecture they are working with, and that’s why it is so primitive and mechanical. I’d like to introduce a better one.

What is an object? I’ve done a little research, and this is what I’ve found:
“Objects may contain data, in the form of fields, often known as attributes; and code, in the form of procedures, often known as methods”—Wikipedia at the time of writing.
“An object stores its state in fields and exposes its behavior through methods”—What Is an Object? by Oracle.
“Each object looks quite a bit like a little computer—it has a state, and it has operations that you can ask it to perform”—Thinking in Java, 4th Ed., Bruce Eckel, p. 16.
“A class is a collection of data fields that hold values and methods that operate on those values”—Java in a Nutshell, 6th Ed., Evans and Flanagan, p. 98.
“An object is some memory that holds a value of some type”—The C++ Programming Language, 4th Ed., Bjarne Stroustrup, p. 40.
“An object consists of some private memory and a set of operations”—Smalltalk-80, Goldberg and Robson, p. 6.
What is common throughout all these definitions is the word “contains” (or “holds,” “consists,” “has,” etc.). They all think that an object is a box with data. And this perspective is exactly what I’m strongly against.
If we look at how C++ or Java are implemented, such a definition of an object sounds technically correct. Indeed, for each object, the Java Virtual Machine allocates a few bytes in memory in order to store the object’s attributes there. Thus, we can technically say, in that language, that an object is an in-memory box with data.
Right, but this is just a corner case!
Let’s try to imagine another object-oriented language that doesn’t store object attributes in memory. Confused? Bear with me for a minute. Let’s say that in that language we define an object:
c {
vin: v,
engine: e
}
Here, vin and engine are attributes of object c (it’s a car; let’s forget about classes for now to focus strictly on objects). Thus, there is a simple object with two attributes. The first one is the car’s VIN, and the second one is its engine. The VIN is an object v, while the engine is e. To make it easier to understand, this is how a similar object would look in Java:
char[] v = {'W','D','B','H',...'7','2','8','8'}; // 17 chars
Engine e = new Engine();
Car c = new Car(v, e);
I’m not entirely sure about the JVM, but in C++ such an object will take exactly 25 bytes in memory (assuming a 64-bit x86 architecture). The first 17 bytes will be taken by the array of chars and another 8 bytes by a pointer to the block of memory with object e. That’s how the C++ compiler understands objects and translates them to the x86 architecture. In C++, objects are just data structures with a clearly defined allocation of data attributes.
In that example, attributes vin and engine are not equal: vin is “data,” while engine is a “pointer” to another object. I intentionally made it this way in order to demonstrate that calling an object a box with data is possible only with vin. Only when the data are located right “inside” the object can we say that the object is actually a box for the data. With engine, it isn’t really true because there is technically no data inside the object. Instead, there is a pointer to another object. If our object had only an engine attribute, it would take just 8 bytes in memory, with none of them actually occupied by “data.”
Now, let’s get back to our new pseudo language. Let’s imagine it treats objects very differently than C++—it doesn’t keep object attributes in memory at all. It doesn’t have pointers, and it doesn’t know anything about x86 architecture. It just knows somehow what attributes belong to an object.
Thus, in our language, objects are no longer boxes with data, either technically or conceptually. They know where the data is, but they don’t contain it. They represent the data, as well as other objects and entities. Indeed, the object c in our imaginary language represents two other objects: a VIN and an engine.
To summarize, we have to understand that even though a mechanical definition of an object is correct in most programming languages on the market at the moment, it is very incorrect conceptually because it treats an object as a box with data that are too visible to the outside world. That visibility provokes us to think procedurally and try to access that data as much as possible.

If we thought of an object as a representative of data instead of a container for them, we would not be in such a hurry to get hold of the data. We would understand that the data are far away and we can’t just reach out and touch them. We should communicate with an object; how exactly it communicates with the data is not our concern.
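To illustrate, here is a minimal, hypothetical Java sketch of such a “representative”: an object that stands for a piece of text stored on disk without holding the characters in memory (the class name and design are mine, not taken from any real library):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical example: an object that REPRESENTS text stored on disk.
// It doesn't contain the characters; it only knows where they live
// and fetches them when asked.
final class TextInFile {
    private final Path location;

    TextInFile(Path location) {
        this.location = location;
    }

    // The data stays in the file; we only talk to the object.
    String content() throws IOException {
        return Files.readString(this.location);
    }
}
```

Callers never see a byte array or a cached string; they ask the object for its content, and how it reaches the file is the object’s own business.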
I hope that in the near future, the market will introduce new object-oriented languages that won’t store objects as in-memory data structures, even technically.
By the way, here is the definition of an object from my favorite book, Object Thinking by David West, p. 66:
An object is the equivalent of the quanta from which the universe is constructed.
What do you think? Is it close to the “representative” definition I just proposed?
https://www.yegor256.com/2016/07/06/data-transfer-object.html
Data Transfer Object Is a Shame
- Palo Alto, CA
- Yegor Bugayenko
- comments
DTO, as far as I understand it, is a cornerstone of the ORM design pattern, which I simply “adore.” But let’s skip to the point: DTO is just a shame, and the man who invented it is just wrong. There is no excuse for what he has done.

By the way, his name, to my knowledge, was Martin Fowler. Maybe he was not the sole inventor of DTO, but he made it legal and recommended its use. With all due respect, he was just wrong.
The key idea of object-oriented programming is to hide data behind objects. This idea has a name: encapsulation. In OOP, data must not be visible. Objects must only have access to the data they encapsulate and never to the data encapsulated by other objects. There can be no arguing about this principle—it is what OOP is all about.
However, DTO runs completely against that principle.
Let’s see a practical example. Say that this is a service that fetches a JSON document from some RESTful API and returns a DTO, which we can then store in the database:
Book book = api.loadBookById(123);
database.saveNewBook(book);
I guess this is what will happen inside the loadBookById() method:
Book loadBookById(int id) {
JsonObject json = /* Load it from RESTful API */
Book book = new Book();
book.setISBN(json.getString("isbn"));
book.setTitle(json.getString("title"));
book.setAuthor(json.getString("author"));
return book;
}
Am I right? I bet I am. It already looks disgusting to me. Anyway, let’s continue. This is what will most likely happen in the saveNewBook() method (I’m using pure JDBC):
void saveNewBook(Book book) {
PreparedStatement stmt = connection.prepareStatement(
"INSERT INTO book VALUES (?, ?, ?)"
);
stmt.setString(1, book.getISBN());
stmt.setString(2, book.getTitle());
stmt.setString(3, book.getAuthor());
stmt.execute();
}
This Book is a classic example of the data transfer object design pattern. All it does is transfer data between two pieces of code, two procedures. The object book is pretty dumb. All it knows how to do is … nothing. It doesn’t do anything. It is actually not an object at all but rather a passive and anemic data structure.
What is the right design? There are a few. For example, this one looks good to me:
Book book = api.bookById(123);
book.save(database);
This is what happens in bookById():
Book bookById(int id) {
return new JsonBook(
/* RESTful API access point */
);
}
This is what happens in Book.save():
void save(Database db) {
JsonObject json = /* Load it from RESTful API */
db.createBook(
json.getString("isbn"),
json.getString("title"),
json.getString("author")
);
}
What happens if there are many more attributes of the book in the JSON that won’t fit nicely as parameters into a single createBook() method? How about this:
void save(Database db) {
db.create()
.withISBN(json.getString("isbn"))
.withTitle(json.getString("title"))
.withAuthor(json.getString("author"))
.deploy();
}
There are many other options. But the main point is that the data never escapes the object book. Once the object is instantiated, the data is not visible or accessible to anyone else. We may only ask our object to save itself or to print itself to some media, but we will never get any data out of it.
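Putting the pieces together, here is a minimal sketch of what such a JsonBook might look like. The JSON document is faked with a Map and Database is a tiny interface; both are hypothetical stand-ins, not real APIs. The point is that the book’s data never escapes through getters; the object pushes it into the database itself:

```java
import java.util.Map;

// Hypothetical stand-in for the database layer used in the text.
interface Database {
    void createBook(String isbn, String title, String author);
}

final class JsonBook {
    // Stands in for the JSON document fetched from the RESTful API.
    private final Map<String, String> json;

    JsonBook(Map<String, String> json) {
        this.json = json;
    }

    // The data flows from the JSON into the database without ever
    // being exposed to the caller through getters.
    void save(Database db) {
        db.createBook(
            this.json.get("isbn"),
            this.json.get("title"),
            this.json.get("author")
        );
    }
}
```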
The very idea of DTO is wrong because it turns object-oriented code into procedural code. We have procedures that manipulate data, and DTO is just a box for that data. Don’t think that way, and don’t do that.
PS. There are a few other names for DTO: business objects, domain objects (not in DDD), entity objects, JavaBeans.

https://www.yegor256.com/2016/06/27/singletons-must-die.html
Singletons Must Die
- Los Angeles, CA
- Yegor Bugayenko
- comments
- Discussed at:
- dzone
I think it’s too obvious to say that a singleton is an anti-pattern; there are tons of articles about that already. More often than not, though, the question is how to define global things without a singleton, and the answer is not obvious to many of us. There are several examples: a database connection pool, a repository, a configuration map, etc. They all naturally seem to be “global”; so what do we do with them?

I assume you already know what a singleton is and why it’s an anti-pattern. If not, I recommend you read this StackOverflow thread: What is so bad about singletons?
Now that we agree it’s a bad deal, what do we do if we need to, let’s say, have access to a database connection pool in many different places within the application? We simply need something like this:
class Database {
public static final Database INSTANCE = new Database();
private Database() {
// create a connection pool
}
public java.sql.Connection connect() {
// Get new connection from the pool
// and return
}
}
Later, in a JAX-RS REST method, say, we need to retrieve something from the database:
@Path("/")
class Index {
@GET
public String text() {
java.sql.Connection connection =
Database.INSTANCE.connect();
return new JdbcSession(connection)
.sql("SELECT text FROM table")
.fetch(new SingleOutcome(String.class));
}
}
In case you’re not familiar with JAX-RS, it’s a simple MVC architecture, and this text() method is a “controller.” Additionally, I’m using JdbcSession, a simple JDBC wrapper from jcabi-jdbc.
We need that Database.INSTANCE to be a singleton, right? We need it to be globally available so that any MVC controller can have direct access to it. Since we all understand and agree that a singleton is an evil thing, what do we replace it with?
Dependency injection is the answer.
We need to make this database connection pool a dependency of the controller and ensure it’s provided through a constructor. However, in this particular case, JAX-RS won’t let us do it through a constructor, thanks to its ugly architecture. But we can create a ServletContextListener, instantiate a Database in its contextInitialized() method, and add that instance as an attribute of the servletContext. Then, inside the controller, we retrieve the servlet context by adding the javax.ws.rs.core.Context annotation to a setter and calling getAttribute() on it. This is absolutely terrible and procedural, but it’s still better than a singleton.
A proper object-oriented design would pass an instance of Database to all objects that may need it through their constructors.
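Here is a minimal sketch of that constructor-injection idea: the controller receives its Database through the constructor instead of reaching for a global INSTANCE. Database is an interface here so that any implementation (a real connection pool, a test fake) can be passed in; all names are illustrative, not from any real framework:

```java
// Hypothetical abstraction of the database; a real one would hand
// out java.sql.Connection objects from a pool.
interface Database {
    String fetchText(); // stands in for connect() plus the SQL query
}

final class Index {
    private final Database database;

    Index(Database database) { // the dependency arrives here
        this.database = database;
    }

    String text() {
        return this.database.fetchText();
    }
}
```

A unit test can now pass in a fake Database without touching any global state, which is exactly what the singleton made impossible.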
Nonetheless, what do we do if there are many dependencies? Do we make a 10-argument constructor? No, we don’t. If our objects really need 10 dependencies to do their work, we need to break them down into smaller ones.
That’s it. Forget about singletons; never use them. Turn them into dependencies and pass them from object to object through the operator new.

Unit tests naturally duplicate a lot of code. Test methods contain similar or almost identical functionality, and this is almost inevitable. Well, we can make more use of the @Before and @BeforeClass features, but sometimes that’s just not possible. We may have, say, 20 test methods in one FooTest.java file. Preparing all objects in a single “before” method is not possible. So we have to do certain things again and again in our test methods.
Let’s take a look at one of the classes in our Takes Framework: VerboseListTest. It’s a unit test, and it has the problem I’m trying to tell you about. Look at that private MSG literal. It is used for the first time in the setUp() method as an argument of an object constructor and then in a few test methods to check how that object behaves. Let me simplify that code:
class FooTest {
private static final String MSG = "something";
@Before
public final void setUp() throws Exception {
this.foo = new Foo(FooTest.MSG);
}
@Test
public void simplyWorks() throws IOException {
assertThat(
foo.doSomething(),
containsString(FooTest.MSG)
);
}
@Test
public void simplyWorksAgain() throws IOException {
assertThat(
foo.doSomethingElse(),
containsString(FooTest.MSG)
);
}
}This is basically what is happening in VerboseListTest and it’s very wrong. Why? Because this shared literal MSG introduced an unnatural coupling between these two test methods. They have nothing in common, because they test different behaviors of class Foo. But this private constant ties them together. Now they are somehow related.
If and when I want to modify one of the test methods, I may need to modify the other one too. Say I want to see how doSomethingElse() behaves if the encapsulated message is an empty string. What do I do? I change the value of the constant FooTest.MSG, which is used by another test method. This is called coupling. And it’s a bad thing.
What do we do? Well, we can use that "something" string literal in both test methods:
class FooTest {
@Test
public void simplyWorks() throws IOException {
assertThat(
new Foo("something").doSomething(),
containsString("something")
);
}
@Test
public void simplyWorksAgain() throws IOException {
assertThat(
new Foo("something").doSomethingElse(),
containsString("something")
);
}
}As you see, I got rid of that setUp() method and the private static literal MSG. What do we have now? Code duplication. String "something" shows up four times in the test class. No static analyzers will tolerate that. Moreover, there are seven (!) test methods in VerboseListTest, which are using MSG. Thus, we will have 14 occurrences of "something", right? Yes, that’s right and that’s most likely why one of authors of this test case introduced the constant—to get rid of duplication. BTW, @Happy-Neko did that in pull request #513, @carlosmiranda reviewed the code and I approved the changes. So, three people made/approved that mistake, including myself.
So what is the right approach that will avoid code duplication and at the same time won’t introduce coupling? Here it is:
class FooTest {
@Test
public void simplyWorks() throws IOException {
final String msg = "something";
assertThat(
new Foo(msg).doSomething(),
containsString(msg)
);
}
@Test
public void simplyWorksAgain() throws IOException {
final String msg = "something else";
assertThat(
new Foo(msg).doSomethingElse(),
containsString(msg)
);
}
}These literals must be different. This is what any static analyzer is saying when it sees "something" in so many places. It questions us—why are they the same? Is it really so important to use "something" everywhere? Why can’t you use different literals? Of course we can. And we should.
The bottom line is that each test method must have its own set of data and objects. They must not be shared between test methods ever. Test methods must always be independent, having nothing in common.
Having that in mind, we can easily conclude that methods like setUp() or any shared variables in test classes are evil. They must not be used and simply must not exist. I think that their invention in JUnit caused a lot of harm to Java code.

Unit tests, naturally, duplicate a lot of code. Test methods contain similar or almost identical functionality and this is almost inevitable. Well, we can use more of that @Before and @BeforeClass features, but sometimes it’s just not possible. We may have, say, 20 test methods in one FooTest.java file. Preparing all objects in one “before” is not possible. So we have to do certain things again and again in our test methods.
Let’s take a look at one of the classes in our Takes Framework: VerboseListTest. It’s a unit test and it has a problem, which I’m trying to tell you about. Look at that MSG private literal. It is used for the first time in setUp() method as an argument of an object constructor and then in a few test methods to check how that object behaves. Let me simplify that code:
class FooTest {
private static final String MSG = "something";
@Before
public final void setUp() throws Exception {
this.foo = new Foo(FooTest.MSG);
}
@Test
public void simplyWorks() throws IOException {
assertThat(
foo.doSomething(),
containsString(FooTest.MSG)
);
}
@Test
public void simplyWorksAgain() throws IOException {
assertThat(
foo.doSomethingElse(),
containsString(FooTest.MSG)
);
}
}This is basically what is happening in VerboseListTest and it’s very wrong. Why? Because this shared literal MSG introduced an unnatural coupling between these two test methods. They have nothing in common, because they test different behaviors of class Foo. But this private constant ties them together. Now they are somehow related.
If and when I want to modify one of the test methods, I may need to modify the other one too. Say I want to see how doSomethingElse() behaves if the encapsulated message is an empty string. What do I do? I change the value of the constant FooTest.MSG, which is used by another test method. This is called coupling. And it’s a bad thing.
What do we do? Well, we can use that "something" string literal in both test methods:
class FooTest {
@Test
public void simplyWorks() throws IOException {
assertThat(
new Foo("something").doSomething(),
containsString("something")
);
}
@Test
public void simplyWorksAgain() throws IOException {
assertThat(
new Foo("something").doSomethingElse(),
containsString("something")
);
}
}As you see, I got rid of that setUp() method and the private static literal MSG. What do we have now? Code duplication. String "something" shows up four times in the test class. No static analyzers will tolerate that. Moreover, there are seven (!) test methods in VerboseListTest, which are using MSG. Thus, we will have 14 occurrences of "something", right? Yes, that’s right and that’s most likely why one of authors of this test case introduced the constant—to get rid of duplication. BTW, @Happy-Neko did that in pull request #513, @carlosmiranda reviewed the code and I approved the changes. So, three people made/approved that mistake, including myself.
So what is the right approach that will avoid code duplication and at the same time won’t introduce coupling? Here it is:
class FooTest {
@Test
public void simplyWorks() throws IOException {
final String msg = "something";
assertThat(
new Foo(msg).doSomething(),
containsString(msg)
);
}
@Test
public void simplyWorksAgain() throws IOException {
final String msg = "something else";
assertThat(
new Foo(msg).doSomethingElse(),
containsString(msg)
);
}
}These literals must be different. This is what any static analyzer is saying when it sees "something" in so many places. It questions us—why are they the same? Is it really so important to use "something" everywhere? Why can’t you use different literals? Of course we can. And we should.
The bottom line is that each test method must have its own set of data and objects. They must not be shared between test methods ever. Test methods must always be independent, having nothing in common.
Having that in mind, we can easily conclude that methods like setUp() or any shared variables in test classes are evil. They must not be used and simply must not exist. I think that their invention in JUnit caused a lot of harm to Java code.
https://www.yegor256.com/2016/05/03/test-methods-must-share-nothing.html
Test Methods Must Share Nothing
- Palo Alto, CA
- Yegor Bugayenko
Constants… I wrote about them some time ago, mostly saying that they are a bad thing when public. They reduce duplication but introduce coupling. A much better way to get rid of duplication is by creating new classes or methods—a traditional OOP approach. This seems to make sense, and in our projects I see fewer and fewer public constants. In some projects we don’t have them at all. But one thing still bothers me: unit tests. Most programmers seem to think that when static analysis says there are too many similar literals in the same file, the best way to get rid of them is via a private static literal. This is just wrong.

Unit tests, naturally, duplicate a lot of code. Test methods contain similar or almost identical functionality, and this is almost inevitable. Well, we can use more of those @Before and @BeforeClass features, but sometimes it’s just not possible. We may have, say, 20 test methods in one FooTest.java file. Preparing all objects in one “before” is not possible. So we have to do certain things again and again in our test methods.
Let’s take a look at one of the classes in our Takes Framework: VerboseListTest. It’s a unit test, and it has exactly the problem I’m talking about. Look at that MSG private literal. It is used for the first time in the setUp() method, as an argument of an object constructor, and then in a few test methods to check how that object behaves. Let me simplify that code:
class FooTest {
private Foo foo;
private static final String MSG = "something";
@Before
public final void setUp() throws Exception {
this.foo = new Foo(FooTest.MSG);
}
@Test
public void simplyWorks() throws IOException {
assertThat(
foo.doSomething(),
containsString(FooTest.MSG)
);
}
@Test
public void simplyWorksAgain() throws IOException {
assertThat(
foo.doSomethingElse(),
containsString(FooTest.MSG)
);
}
}This is basically what is happening in VerboseListTest, and it’s very wrong. Why? Because this shared literal MSG introduces an unnatural coupling between these two test methods. They have nothing in common, because they test different behaviors of class Foo. But this private constant ties them together. Now they are somehow related.
If and when I want to modify one of the test methods, I may need to modify the other one too. Say I want to see how doSomethingElse() behaves if the encapsulated message is an empty string. What do I do? I change the value of the constant FooTest.MSG, which is used by another test method. This is called coupling. And it’s a bad thing.
What do we do? Well, we can use that "something" string literal in both test methods:
class FooTest {
@Test
public void simplyWorks() throws IOException {
assertThat(
new Foo("something").doSomething(),
containsString("something")
);
}
@Test
public void simplyWorksAgain() throws IOException {
assertThat(
new Foo("something").doSomethingElse(),
containsString("something")
);
}
}As you see, I got rid of that setUp() method and the private static literal MSG. What do we have now? Code duplication. The string "something" shows up four times in the test class. No static analyzer will tolerate that. Moreover, there are seven (!) test methods in VerboseListTest that use MSG. Thus, we will have 14 occurrences of "something", right? Yes, that’s right, and that’s most likely why one of the authors of this test case introduced the constant—to get rid of duplication. BTW, @Happy-Neko did that in pull request #513, @carlosmiranda reviewed the code, and I approved the changes. So, three people made/approved that mistake, including myself.
So what is the right approach that will avoid code duplication and at the same time won’t introduce coupling? Here it is:
class FooTest {
@Test
public void simplyWorks() throws IOException {
final String msg = "something";
assertThat(
new Foo(msg).doSomething(),
containsString(msg)
);
}
@Test
public void simplyWorksAgain() throws IOException {
final String msg = "something else";
assertThat(
new Foo(msg).doSomethingElse(),
containsString(msg)
);
}
}These literals must be different. This is what any static analyzer is saying when it sees "something" in so many places. It questions us—why are they the same? Is it really so important to use "something" everywhere? Why can’t you use different literals? Of course we can. And we should.
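With each test owning its data, the empty-string experiment mentioned earlier no longer threatens its neighbors: it becomes one more independent method with its own local literal. A self-contained sketch (this Foo is a minimal hypothetical stand-in, since the class in the text is only illustrative):

```java
// A minimal stand-in Foo, so the sketch compiles on its own;
// its behavior here is hypothetical.
class Foo {
    private final String message;
    Foo(final String msg) {
        this.message = msg;
    }
    String doSomethingElse() {
        return "[" + this.message + "]";
    }
}

public class Main {
    public static void main(final String[] args) {
        // The new check owns its own literal; no other test is affected.
        final String msg = "";
        System.out.println(new Foo(msg).doSomethingElse().contains(msg)); // prints true
    }
}
```

In a real JUnit test this would simply be one more @Test method with its own final String msg.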
The bottom line is that each test method must have its own set of data and objects. They must not be shared between test methods ever. Test methods must always be independent, having nothing in common.
Having that in mind, we can easily conclude that methods like setUp() or any shared variables in test classes are evil. They must not be used and simply must not exist. I think that their invention in JUnit caused a lot of harm to Java code.
https://www.yegor256.com/2016/04/26/why-inputstream-design-is-wrong.html
Why InputStream Design Is Wrong
- Washington, D.C.
- Yegor Bugayenko
It’s not just about InputStream; this class is simply a good example of bad design. I’m talking about its three overloaded read() methods. I’ve mentioned this problem in Section 2.9 of Elegant Objects. In a few words, I strongly believe that interfaces must be “functionality poor.” InputStream should have been an interface in the first place, and it should have had a single method, read(byte[]). Then, if its authors wanted to give us extra functionality, they should have created supplementary “smart” classes.

This is how it looks now:
abstract class InputStream {
int read();
int read(byte[] buffer, int offset, int length);
int read(byte[] buffer);
}What’s wrong? It’s very convenient to have the ability to read a single byte, an array of bytes, or even an array of bytes with direct positioning at a specific place in the buffer!
However, we are still lacking a few methods: for reading the bytes and immediately saving them into a file, converting them to text with a selected encoding, sending them by email, and posting them on Twitter. It would be great to have these features too, right in the poor InputStream. I hope the Oracle Java team is working on them now.
In the meantime, let’s see what exactly is wrong with what these bright engineers have designed for us already. Or maybe let me show how I would design InputStream, and we’ll compare:
interface InputStream {
int read(byte[] buffer, int offset, int length);
}This is my design. The InputStream is responsible for reading bytes from the stream. There is one single method for this feature. Is it convenient for everybody? Does it read and post on Twitter? Not yet. Do we need that functionality? Of course we do, but that doesn’t mean we should add it to the interface. Instead, we will create a supplementary “smart” class:
interface InputStream {
int read(byte[] buffer, int offset, int length);
class Smart {
private final InputStream origin;
public Smart(InputStream stream) {
this.origin = stream;
}
public int read() {
final byte[] buffer = new byte[1];
final int read = this.origin.read(buffer, 0, 1);
final int result;
if (read < 1) {
result = -1;
} else {
result = buffer[0] & 0xff; // mask to 0..255, so -1 unambiguously means EOF
}
return result;
}
}
}Now, we want to read a single byte from the stream. Here is how:
final InputStream input = new FileInputStream("/tmp/a.txt");
final int b = new InputStream.Smart(input).read();The functionality of reading a single byte is outside of InputStream, because this is not its business. The stream doesn’t need to know how to manage the data after it is read. All the stream is responsible for is reading, not parsing or manipulating afterwards.
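The same logic scales to any further convenience. For instance, draining the whole stream could be one more method of Smart, still leaving the interface itself with a single method. A self-contained sketch (readAll() and BytesInput are illustrative names, not from the original text):

```java
import java.io.ByteArrayOutputStream;

// The single-method interface from the text, plus a hypothetical
// Smart.readAll() convenience layered on top of it.
interface InputStream {
    int read(byte[] buffer, int offset, int length);

    class Smart {
        private final InputStream origin;
        Smart(final InputStream input) {
            this.origin = input;
        }
        // Drains the stream; this convenience stays outside the interface.
        byte[] readAll() {
            final ByteArrayOutputStream out = new ByteArrayOutputStream();
            final byte[] buf = new byte[1024];
            while (true) {
                final int count = this.origin.read(buf, 0, buf.length);
                if (count <= 0) {
                    break;
                }
                out.write(buf, 0, count);
            }
            return out.toByteArray();
        }
    }
}

// A minimal in-memory implementation, for demonstration only.
class BytesInput implements InputStream {
    private final byte[] data;
    private int pos;
    BytesInput(final byte[] data) {
        this.data = data;
    }
    @Override
    public int read(final byte[] buffer, final int offset, final int length) {
        if (this.pos >= this.data.length) {
            return -1; // end of stream
        }
        final int count = Math.min(length, this.data.length - this.pos);
        System.arraycopy(this.data, this.pos, buffer, offset, count);
        this.pos += count;
        return count;
    }
}

public class Main {
    public static void main(final String[] args) {
        final InputStream input = new BytesInput("hello".getBytes());
        System.out.println(new InputStream.Smart(input).readAll().length); // prints 5
    }
}
```

Nothing here forces a change to the interface; every new convenience lands in the “smart” class.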
Interfaces must be small.
Obviously, method overloading in interfaces is a code smell. An interface with more than three methods is a good candidate for refactoring. If methods overload each other—it’s serious trouble.
Interfaces must be small!
You may say that the creators of InputStream cared about performance, and that’s why they allowed us to use read() in three different forms. Then I have to ask again: why not create a method for reading bytes and immediately posting them on Twitter? That would be fantastically fast. Isn’t that what we all want? Fast software that nobody has any desire to read or maintain.

Let’s say there is a class that is supposed to read a web page and return its content:
class Page {
private final String uri;
Page(final String address) {
this.uri = address;
}
public String html() throws IOException {
return IOUtils.toString(
new URL(this.uri).openStream(),
"UTF-8"
);
}
}Looks simple and straightforward, right? Yes, it’s a rather cohesive and solid class. Here is how we use it to read the content of the Google front page:
String html = new Page("http://www.google.com").html();Everything is fine until we start making this class more powerful. Let’s say we want to configure the encoding. We don’t always want to use "UTF-8". We want it to be configurable. Here is what we do:
class Page {
private final String uri;
private final String encoding;
Page(final String address, final String enc) {
this.uri = address;
this.encoding = enc;
}
public String html() throws IOException {
return IOUtils.toString(
new URL(this.uri).openStream(),
this.encoding
);
}
}Done, the encoding is encapsulated and configurable. Now, let’s say we want to change the behavior of the class when it encounters an empty page. If an empty page is loaded, we want to return "<html/>". But not always. We want this to be configurable. Here is what we do:
class Page {
private final String uri;
private final String encoding;
private final boolean alwaysHtml;
Page(final String address, final String enc,
final boolean always) {
this.uri = address;
this.encoding = enc;
this.alwaysHtml = always;
}
public String html() throws IOException {
String html = IOUtils.toString(
new URL(this.uri).openStream(),
this.encoding
);
if (html.isEmpty() && this.alwaysHtml) {
html = "<html/>";
}
return html;
}
}The class is getting bigger, huh? It’s great, we’re good programmers and our code must be complex, right? The more complex it is, the better programmers we are! I’m being sarcastic. Definitely not! But let’s move on. Now we want our class to proceed anyway, even if the encoding is not supported on the current platform:
class Page {
private final String uri;
private final String encoding;
private final boolean alwaysHtml;
private final boolean encodeAnyway;
Page(final String address, final String enc,
final boolean always, final boolean encode) {
this.uri = address;
this.encoding = enc;
this.alwaysHtml = always;
this.encodeAnyway = encode;
}
public String html() throws IOException,
UnsupportedEncodingException {
final byte[] bytes = IOUtils.toByteArray(
new URL(this.uri).openStream()
);
String html;
try {
html = new String(bytes, this.encoding);
} catch (UnsupportedEncodingException ex) {
if (!this.encodeAnyway) {
throw ex;
}
html = new String(bytes, "UTF-8")
}
if (html.isEmpty() && this.alwaysHtml) {
html = "<html/>";
}
return html;
}
}The class is growing and becoming more and more powerful! Now it’s time to introduce a new class, which we will call PageSettings:
class Page {
private final String uri;
private final PageSettings settings;
Page(final String address, final PageSettings stts) {
this.uri = address;
this.settings = stts;
}
public String html() throws IOException {
final byte[] bytes = IOUtils.toByteArray(
new URL(this.uri).openStream()
);
String html;
try {
html = new String(bytes, this.settings.getEncoding());
} catch (UnsupportedEncodingException ex) {
if (!this.settings.isEncodeAnyway()) {
throw ex;
}
html = new String(bytes, "UTF-8")
}
if (html.isEmpty() && this.settings.isAlwaysHtml()) {
html = "<html/>";
}
return html;
}
}Class PageSettings is basically a holder of parameters, without any behavior. It has getters, which give us access to the parameters: isEncodeAnyway(), isAlwaysHtml(), and getEncoding(). If we keep going in this direction, there could be a few dozen configuration settings in that class. This may look very convenient and is a very typical pattern in Java world. For example, look at JobConf from Hadoop. This is how we will call our highly configurable Page (I’m assuming PageSettings is immutable):
String html = new Page(
"http://www.google.com",
new PageSettings()
.withEncoding("ISO_8859_1")
.withAlwaysHtml(true)
.withEncodeAnyway(false)
).html();However, no matter how convenient it may look at first glance, this approach is very wrong. Mostly because it encourages us to make big and non-cohesive objects. They grow in size and become less testable, less maintainable and less readable.
To prevent that from happening, I would suggest a simple rule here: object behavior should not be configurable. Or, more technically, encapsulated properties must not be used to change the behavior of an object.
Object properties are there only to coordinate the location of a real-world entity, which the object is representing. The uri is the coordinate, while the alwaysHtml boolean property is a behavior changing trigger. See the difference?
So, what should we do instead? What is the right design? We must use composable decorators. Here is how:
Page page = new NeverEmptyPage(
new DefaultPage("http://www.google.com")
)
String html = new AlwaysTextPage(
new TextPage(page, "ISO_8859_1")
page
).html();Here is how our DefaultPage would look (yes, I had to change its design a bit):
class DefaultPage implements Page {
private final String uri;
DefaultPage(final String address) {
this.uri = address;
}
@Override
public byte[] html() throws IOException {
return IOUtils.toByteArray(
new URL(this.uri).openStream()
);
}
}As you see, I’m making it implement interface Page. Now TextPage decorator, which converts an array of bytes to a text using provided encoding:
class TextPage {
private final Page origin;
private final String encoding;
TextPage(final Page page, final String enc) {
this.origin = page;
this.encoding = enc;
}
public String html() throws IOException {
return new String(
this.origin.html(),
this.encoding
);
}
}Now the NeverEmptyPage:
class NeverEmptyPage implements Page {
private final Page origin;
NeverEmptyPage(final Page page) {
this.origin = page;
}
@Override
public byte[] html() throws IOException {
byte[] bytes = this.origin.html();
if (bytes.length == 0) {
bytes = "<html/>".getBytes();
}
return bytes;
}
}And finally the AlwaysTextPage:
class AlwaysTextPage {
private final TextPage origin;
private final Page source;
AlwaysTextPage(final TextPage page, final Page src) {
this.origin = page;
this.source = src;
}
public String html() throws IOException {
String html;
try {
html = this.origin.html();
} catch (UnsupportedEncodingException ex) {
html = new TextPage(this.source, "UTF-8").html();
}
return html;
}
}You may say that AlwaysTextPage will make two calls to the encapsulated origin, in case of an unsupported encoding, which will lead to a duplicated HTTP request. That’s true and this is by design. We don’t want this duplicated HTTP roundtrip to happen. Let’s introduce one more class, which will cache the page fetched ( not thread-safe, but it’s not important now):
class OncePage implements Page {
private final Page origin;
private final AtomicReference<byte[]> cache =
new AtomicReference<>;
OncePage(final Page page) {
this.origin = page;
}
@Override
public byte[] html() throws IOException {
if (this.cache.get() == null) {
this.cache.set(this.origin.html());
}
return this.cache.get();
}
}Now, our code should look like this (pay attention, I’m now using OncePage):
Page page = new NeverEmptyPage(
new OncePage(
new DefaultPage("http://www.google.com")
)
)
String html = new AlwaysTextPage(
new TextPage(page, "ISO_8859_1")
"UTF-8"
).html();This is probably the most code-intensive post on this site so far, but I hope it’s readable and I managed to convey the idea. Now we have five classes, each of which is rather small, easy to read and easy to reuse.
Just follow the rule: never make classes configurable!
" /> philosophical point of view? I can, but let’s take a look at it from a practical perspective.
Let’s say there is a class that is supposed to read a web page and return its content:
class Page {
private final String uri;
Page(final String address) {
this.uri = address;
}
public String html() throws IOException {
return IOUtils.toString(
new URL(this.uri).openStream(),
"UTF-8"
);
}
}Looks simple and straight-forward, right? Yes, it’s a rather cohesive and solid class. Here is how we use it to read the content of Google front page:
String html = new Page("http://www.google.com").html();Everything is fine until we start making this class more powerful. Let’s say we want to configure the encoding. We don’t always want to use "UTF-8". We want it to be configurable. Here is what we do:
class Page {
private final String uri;
private final String encoding;
Page(final String address, final String enc) {
this.uri = address;
this.encoding = enc;
}
public String html() throws IOException {
return IOUtils.toString(
new URL(this.uri).openStream(),
this.encoding
);
}
}Done, the encoding is encapsulated and configurable. Now, let’s say we want to change the behavior of the class for the situation of an empty page. If an empty page is loaded, we want to return "<html/>". But not always. We want this to be configurable. Here is what we do:
class Page {
private final String uri;
private final String encoding;
private final boolean alwaysHtml;
Page(final String address, final String enc,
final boolean always) {
this.uri = address;
this.encoding = enc;
this.alwaysHtml = always;
}
public String html() throws IOException {
String html = IOUtils.toString(
new URL(this.uri).openStream(),
this.encoding
);
if (html.isEmpty() && this.alwaysHtml) {
html = "<html/>";
}
return html;
}
}The class is getting bigger, huh? It’s great, we’re good programmers and our code must be complex, right? The more complex it is, the better programmers we are! I’m being sarcastic. Definitely not! But let’s move on. Now we want our class to proceed anyway, even if the encoding is not supported on the current platform:
class Page {
private final String uri;
private final String encoding;
private final boolean alwaysHtml;
private final boolean encodeAnyway;
Page(final String address, final String enc,
final boolean always, final boolean encode) {
this.uri = address;
this.encoding = enc;
this.alwaysHtml = always;
this.encodeAnyway = encode;
}
public String html() throws IOException,
UnsupportedEncodingException {
final byte[] bytes = IOUtils.toByteArray(
new URL(this.uri).openStream()
);
String html;
try {
html = new String(bytes, this.encoding);
} catch (UnsupportedEncodingException ex) {
if (!this.encodeAnyway) {
throw ex;
}
html = new String(bytes, "UTF-8")
}
if (html.isEmpty() && this.alwaysHtml) {
html = "<html/>";
}
return html;
}
}The class is growing and becoming more and more powerful! Now it’s time to introduce a new class, which we will call PageSettings:
class Page {
private final String uri;
private final PageSettings settings;
Page(final String address, final PageSettings stts) {
this.uri = address;
this.settings = stts;
}
public String html() throws IOException {
final byte[] bytes = IOUtils.toByteArray(
new URL(this.uri).openStream()
);
String html;
try {
html = new String(bytes, this.settings.getEncoding());
} catch (UnsupportedEncodingException ex) {
if (!this.settings.isEncodeAnyway()) {
throw ex;
}
html = new String(bytes, "UTF-8")
}
if (html.isEmpty() && this.settings.isAlwaysHtml()) {
html = "<html/>";
}
return html;
}
}Class PageSettings is basically a holder of parameters, without any behavior. It has getters, which give us access to the parameters: isEncodeAnyway(), isAlwaysHtml(), and getEncoding(). If we keep going in this direction, there could be a few dozen configuration settings in that class. This may look very convenient and is a very typical pattern in Java world. For example, look at JobConf from Hadoop. This is how we will call our highly configurable Page (I’m assuming PageSettings is immutable):
String html = new Page(
"http://www.google.com",
new PageSettings()
.withEncoding("ISO_8859_1")
.withAlwaysHtml(true)
.withEncodeAnyway(false)
).html();However, no matter how convenient it may look at first glance, this approach is very wrong. Mostly because it encourages us to make big and non-cohesive objects. They grow in size and become less testable, less maintainable and less readable.
To prevent that from happening, I would suggest a simple rule here: object behavior should not be configurable. Or, more technically, encapsulated properties must not be used to change the behavior of an object.
Object properties are there only to coordinate the location of a real-world entity, which the object is representing. The uri is the coordinate, while the alwaysHtml boolean property is a behavior changing trigger. See the difference?
So, what should we do instead? What is the right design? We must use composable decorators. Here is how:
Page page = new NeverEmptyPage(
new DefaultPage("http://www.google.com")
)
String html = new AlwaysTextPage(
new TextPage(page, "ISO_8859_1")
page
).html();Here is how our DefaultPage would look (yes, I had to change its design a bit):
class DefaultPage implements Page {
private final String uri;
DefaultPage(final String address) {
this.uri = address;
}
@Override
public byte[] html() throws IOException {
return IOUtils.toByteArray(
new URL(this.uri).openStream()
);
}
}As you see, I’m making it implement interface Page. Now TextPage decorator, which converts an array of bytes to a text using provided encoding:
class TextPage {
private final Page origin;
private final String encoding;
TextPage(final Page page, final String enc) {
this.origin = page;
this.encoding = enc;
}
public String html() throws IOException {
return new String(
this.origin.html(),
this.encoding
);
}
}Now the NeverEmptyPage:
class NeverEmptyPage implements Page {
private final Page origin;
NeverEmptyPage(final Page page) {
this.origin = page;
}
@Override
public byte[] html() throws IOException {
byte[] bytes = this.origin.html();
if (bytes.length == 0) {
bytes = "<html/>".getBytes();
}
return bytes;
}
}And finally the AlwaysTextPage:
class AlwaysTextPage {
private final TextPage origin;
private final Page source;
AlwaysTextPage(final TextPage page, final Page src) {
this.origin = page;
this.source = src;
}
public String html() throws IOException {
String html;
try {
html = this.origin.html();
} catch (UnsupportedEncodingException ex) {
html = new TextPage(this.source, "UTF-8").html();
}
return html;
}
}You may say that AlwaysTextPage will make two calls to the encapsulated origin, in case of an unsupported encoding, which will lead to a duplicated HTTP request. That’s true and this is by design. We don’t want this duplicated HTTP roundtrip to happen. Let’s introduce one more class, which will cache the page fetched ( not thread-safe, but it’s not important now):
class OncePage implements Page {
private final Page origin;
private final AtomicReference<byte[]> cache =
new AtomicReference<>;
OncePage(final Page page) {
this.origin = page;
}
@Override
public byte[] html() throws IOException {
if (this.cache.get() == null) {
this.cache.set(this.origin.html());
}
return this.cache.get();
}
}Now, our code should look like this (pay attention, I’m now using OncePage):
Page page = new NeverEmptyPage(
new OncePage(
new DefaultPage("http://www.google.com")
)
)
String html = new AlwaysTextPage(
new TextPage(page, "ISO_8859_1")
"UTF-8"
).html();This is probably the most code-intensive post on this site so far, but I hope it’s readable and I managed to convey the idea. Now we have five classes, each of which is rather small, easy to read and easy to reuse.
Just follow the rule: never make classes configurable!
"/>
https://www.yegor256.com/2016/04/19/object-must-not-be-configurable.html
Object Behavior Must Not Be Configurable
- New York, NY
- Yegor Bugayenko
Using object properties as configuration parameters is a very common mistake we keep making mostly because our objects are mutable—we configure them. We change their behavior by injecting parameters or even entire settings/configuration objects into them. Do I have to say that it’s abusive and disrespectful from a philosophical point of view? I can, but let’s take a look at it from a practical perspective.

Let’s say there is a class that is supposed to read a web page and return its content:
class Page {
private final String uri;
Page(final String address) {
this.uri = address;
}
public String html() throws IOException {
return IOUtils.toString(
new URL(this.uri).openStream(),
"UTF-8"
);
}
}Looks simple and straightforward, right? Yes, it's a rather cohesive and solid class. Here is how we use it to read the content of the Google front page:
String html = new Page("http://www.google.com").html();Everything is fine until we start making this class more powerful. Let’s say we want to configure the encoding. We don’t always want to use "UTF-8". We want it to be configurable. Here is what we do:
class Page {
private final String uri;
private final String encoding;
Page(final String address, final String enc) {
this.uri = address;
this.encoding = enc;
}
public String html() throws IOException {
return IOUtils.toString(
new URL(this.uri).openStream(),
this.encoding
);
}
}Done, the encoding is encapsulated and configurable. Now, let’s say we want to change the behavior of the class for the situation of an empty page. If an empty page is loaded, we want to return "<html/>". But not always. We want this to be configurable. Here is what we do:
class Page {
private final String uri;
private final String encoding;
private final boolean alwaysHtml;
Page(final String address, final String enc,
final boolean always) {
this.uri = address;
this.encoding = enc;
this.alwaysHtml = always;
}
public String html() throws IOException {
String html = IOUtils.toString(
new URL(this.uri).openStream(),
this.encoding
);
if (html.isEmpty() && this.alwaysHtml) {
html = "<html/>";
}
return html;
}
}The class is getting bigger, huh? It’s great, we’re good programmers and our code must be complex, right? The more complex it is, the better programmers we are! I’m being sarcastic. Definitely not! But let’s move on. Now we want our class to proceed anyway, even if the encoding is not supported on the current platform:
class Page {
private final String uri;
private final String encoding;
private final boolean alwaysHtml;
private final boolean encodeAnyway;
Page(final String address, final String enc,
final boolean always, final boolean encode) {
this.uri = address;
this.encoding = enc;
this.alwaysHtml = always;
this.encodeAnyway = encode;
}
public String html() throws IOException,
UnsupportedEncodingException {
final byte[] bytes = IOUtils.toByteArray(
new URL(this.uri).openStream()
);
String html;
try {
html = new String(bytes, this.encoding);
} catch (UnsupportedEncodingException ex) {
if (!this.encodeAnyway) {
throw ex;
}
html = new String(bytes, "UTF-8");
}
if (html.isEmpty() && this.alwaysHtml) {
html = "<html/>";
}
return html;
}
}The class is growing and becoming more and more powerful! Now it’s time to introduce a new class, which we will call PageSettings:
class Page {
private final String uri;
private final PageSettings settings;
Page(final String address, final PageSettings stts) {
this.uri = address;
this.settings = stts;
}
public String html() throws IOException {
final byte[] bytes = IOUtils.toByteArray(
new URL(this.uri).openStream()
);
String html;
try {
html = new String(bytes, this.settings.getEncoding());
} catch (UnsupportedEncodingException ex) {
if (!this.settings.isEncodeAnyway()) {
throw ex;
}
html = new String(bytes, "UTF-8");
}
if (html.isEmpty() && this.settings.isAlwaysHtml()) {
html = "<html/>";
}
return html;
}
}Class PageSettings is basically a holder of parameters, without any behavior. It has getters, which give us access to the parameters: isEncodeAnyway(), isAlwaysHtml(), and getEncoding(). If we keep going in this direction, there could be a few dozen configuration settings in that class. This may look very convenient, and it is a typical pattern in the Java world. For example, look at JobConf from Hadoop. This is how we will call our highly configurable Page (I'm assuming PageSettings is immutable):
String html = new Page(
"http://www.google.com",
new PageSettings()
.withEncoding("ISO_8859_1")
.withAlwaysHtml(true)
.withEncodeAnyway(false)
).html();However, no matter how convenient it may look at first glance, this approach is very wrong. Mostly because it encourages us to make big and non-cohesive objects. They grow in size and become less testable, less maintainable and less readable.
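For completeness, here is roughly what that PageSettings holder could look like. This is a sketch under my own assumptions (the original never shows the class); only the three getter and with* names come from the usage example above:

```java
// Hypothetical sketch of the PageSettings parameter holder. The defaults
// and constructor shape are assumptions; only the getter and wither names
// are taken from the article's usage example.
public final class PageSettings {
    private final String encoding;
    private final boolean alwaysHtml;
    private final boolean encodeAnyway;

    public PageSettings() {
        this("UTF-8", false, false); // assumed defaults
    }

    private PageSettings(final String enc, final boolean always,
        final boolean encode) {
        this.encoding = enc;
        this.alwaysHtml = always;
        this.encodeAnyway = encode;
    }

    // Each "wither" returns a new instance, keeping the holder immutable.
    public PageSettings withEncoding(final String enc) {
        return new PageSettings(enc, this.alwaysHtml, this.encodeAnyway);
    }

    public PageSettings withAlwaysHtml(final boolean always) {
        return new PageSettings(this.encoding, always, this.encodeAnyway);
    }

    public PageSettings withEncodeAnyway(final boolean encode) {
        return new PageSettings(this.encoding, this.alwaysHtml, encode);
    }

    public String getEncoding() {
        return this.encoding;
    }

    public boolean isAlwaysHtml() {
        return this.alwaysHtml;
    }

    public boolean isEncodeAnyway() {
        return this.encodeAnyway;
    }
}
```

Notice that every new setting forces a new field, a new wither, and a new getter: the holder grows linearly with every behavior toggle we invent.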
To prevent that from happening, I would suggest a simple rule here: object behavior should not be configurable. Or, more technically, encapsulated properties must not be used to change the behavior of an object.
Object properties are there only to locate the real-world entity that the object represents. The uri is such a coordinate, while the alwaysHtml boolean property is a behavior-changing trigger. See the difference?
So, what should we do instead? What is the right design? We must use composable decorators. Here is how:
Page page = new NeverEmptyPage(
new DefaultPage("http://www.google.com")
);
String html = new AlwaysTextPage(
new TextPage(page, "ISO_8859_1"),
page
).html();Here is how our DefaultPage would look (yes, I had to change its design a bit):
class DefaultPage implements Page {
private final String uri;
DefaultPage(final String address) {
this.uri = address;
}
@Override
public byte[] html() throws IOException {
return IOUtils.toByteArray(
new URL(this.uri).openStream()
);
}
}As you see, I'm making it implement the interface Page. Now the TextPage decorator, which converts an array of bytes to text using the provided encoding:
class TextPage {
private final Page origin;
private final String encoding;
TextPage(final Page page, final String enc) {
this.origin = page;
this.encoding = enc;
}
public String html() throws IOException {
return new String(
this.origin.html(),
this.encoding
);
}
}Now the NeverEmptyPage:
class NeverEmptyPage implements Page {
private final Page origin;
NeverEmptyPage(final Page page) {
this.origin = page;
}
@Override
public byte[] html() throws IOException {
byte[] bytes = this.origin.html();
if (bytes.length == 0) {
bytes = "<html/>".getBytes();
}
return bytes;
}
}And finally the AlwaysTextPage:
class AlwaysTextPage {
private final TextPage origin;
private final Page source;
AlwaysTextPage(final TextPage page, final Page src) {
this.origin = page;
this.source = src;
}
public String html() throws IOException {
String html;
try {
html = this.origin.html();
} catch (UnsupportedEncodingException ex) {
html = new TextPage(this.source, "UTF-8").html();
}
return html;
}
}You may say that AlwaysTextPage will make two calls to the encapsulated origin in the case of an unsupported encoding, which will lead to a duplicated HTTP request. That's true, and it is by design. If we don't want this duplicated HTTP round-trip to happen, let's introduce one more class, which will cache the fetched page (not thread-safe, but that's not important now):
class OncePage implements Page {
private final Page origin;
private final AtomicReference<byte[]> cache =
new AtomicReference<>();
OncePage(final Page page) {
this.origin = page;
}
@Override
public byte[] html() throws IOException {
if (this.cache.get() == null) {
this.cache.set(this.origin.html());
}
return this.cache.get();
}
}Now, our code should look like this (pay attention, I’m now using OncePage):
Page page = new NeverEmptyPage(
new OncePage(
new DefaultPage("http://www.google.com")
)
);
String html = new AlwaysTextPage(
new TextPage(page, "ISO_8859_1"),
page
).html();This is probably the most code-intensive post on this site so far, but I hope it's readable and I managed to convey the idea. Now we have five classes, each of which is rather small, easy to read and easy to reuse.
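To see the decorators actually compose, here is a compact, self-contained sketch. The one-method Page interface is implied but never spelled out in the text, and FakePage is my stand-in for DefaultPage so the example runs without a network connection:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;

// The one-method interface the decorators share (implied by the article).
interface Page {
    byte[] html() throws IOException;
}

// Stand-in for DefaultPage: serves fixed bytes instead of fetching a URL.
final class FakePage implements Page {
    private final byte[] content;
    FakePage(final byte[] bytes) {
        this.content = bytes;
    }
    @Override
    public byte[] html() {
        return this.content;
    }
}

// Decorator: substitutes "<html/>" when the origin is empty.
final class NeverEmptyPage implements Page {
    private final Page origin;
    NeverEmptyPage(final Page page) {
        this.origin = page;
    }
    @Override
    public byte[] html() throws IOException {
        byte[] bytes = this.origin.html();
        if (bytes.length == 0) {
            bytes = "<html/>".getBytes(StandardCharsets.UTF_8);
        }
        return bytes;
    }
}

// Decorator: turns bytes into text using the given encoding.
final class TextPage {
    private final Page origin;
    private final String encoding;
    TextPage(final Page page, final String enc) {
        this.origin = page;
        this.encoding = enc;
    }
    public String html() throws IOException {
        return new String(this.origin.html(), this.encoding);
    }
}

public class Demo {
    public static void main(String[] args) throws IOException {
        // An empty page becomes "<html/>" thanks to the decorator chain.
        String html = new TextPage(
            new NeverEmptyPage(new FakePage(new byte[0])),
            "UTF-8"
        ).html();
        System.out.println(html); // prints "<html/>"
    }
}
```

Each class does one thing, and behavior is added by wrapping, not by flags.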
Just follow the rule: never make classes configurable!
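The post sets thread safety of OncePage aside; for the curious, here is one way a thread-safe variant could look. This is my own sketch, not the author's code, and the Page interface is restated since the original never spells it out:

```java
import java.io.IOException;
import java.util.concurrent.atomic.AtomicReference;

// The shared one-method interface (implied by the article).
interface Page {
    byte[] html() throws IOException;
}

// A thread-safe take on OncePage: the double-check inside a synchronized
// block guarantees the origin is fetched at most once, even under races.
final class OncePage implements Page {
    private final Page origin;
    private final AtomicReference<byte[]> cache = new AtomicReference<>();
    OncePage(final Page page) {
        this.origin = page;
    }
    @Override
    public byte[] html() throws IOException {
        byte[] bytes = this.cache.get();
        if (bytes == null) {
            synchronized (this.cache) {
                bytes = this.cache.get();
                if (bytes == null) {
                    bytes = this.origin.html();
                    this.cache.set(bytes);
                }
            }
        }
        return bytes;
    }
}

public class Demo {
    public static void main(String[] args) throws IOException {
        final int[] calls = {0};
        // A counting fake page stands in for the real network fetch.
        Page counting = () -> {
            calls[0]++;
            return "<html/>".getBytes();
        };
        Page once = new OncePage(counting);
        once.html();
        once.html();
        System.out.println(calls[0]); // prints 1: the origin was hit once
    }
}
```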

Long story short, there is one big problem with annotations—they encourage us to implement object functionality outside of an object, which is against the very principle of encapsulation. The object is not solid any more, since its behavior is not defined entirely by its own methods—some of its functionality stays elsewhere. Why is it bad? Let’s see in a few examples.
@Inject
Say we annotate a property with @Inject:
import javax.inject.Inject;
public class Books {
@Inject
private DB db; // not final: the container has to set it reflectively
// some methods here, which use this.db
}Then we have an injector that knows what to inject:
Injector injector = Guice.createInjector(
new AbstractModule() {
@Override
public void configure() {
this.bind(DB.class).toInstance(
new Postgres("jdbc:postgresql:5740/main")
);
}
}
);Now we’re making an instance of class Books via the container:
Books books = injector.getInstance(Books.class);The class Books has no idea how, or by whom, an instance of class DB will be injected into it. This happens behind the scenes and outside of its control. The injector will do it. It may look convenient, but this attitude causes a lot of damage to the entire code base. The control is lost (not inverted, but lost!). The object is not in charge any more. It can't be responsible for what's happening to it.
Instead, here is how this should be done:
class Books {
private final DB db;
Books(final DB base) {
this.db = base;
}
// some methods here, which use this.db
}This article explains why Dependency Injection containers are a wrong idea in the first place: Dependency Injection Containers are Code Polluters. Annotations basically provoke us to build such containers and use them. We move functionality outside of our objects and put it into containers, or somewhere else. That's because we don't want to duplicate the same code over and over again, right? That's correct, duplication is bad, but tearing an object apart is even worse. Way worse. The same is true about ORM (JPA/Hibernate), where annotations are actively used. Check this post; it explains what is wrong about ORM: ORM Is an Offensive Anti-Pattern. Annotations by themselves are not the key motivator, but they encourage us to tear objects apart and keep the parts in different places: in containers, sessions, managers, controllers, etc.
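A practical payoff of the constructor version is testability: you can hand the object a fake dependency, with no container in sight. A sketch under my own assumptions (the one-method DB interface and the fake are invented for illustration):

```java
// Hypothetical one-method DB interface, invented for this illustration.
interface DB {
    String fetch(String query);
}

// Constructor injection: the object receives its dependency explicitly
// and stays in full control of it.
final class Books {
    private final DB db;
    Books(final DB base) {
        this.db = base;
    }
    String titleByIsbn(final String isbn) {
        return this.db.fetch("SELECT title FROM books WHERE isbn = " + isbn);
    }
}

public class Demo {
    public static void main(String[] args) {
        // In a unit test, a lambda fake replaces Postgres; no Guice needed.
        DB fake = query -> "Elegant Objects";
        Books books = new Books(fake);
        System.out.println(books.titleByIsbn("0132350882"));
        // prints "Elegant Objects"
    }
}
```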
@XmlElement
This is how JAXB works when you want to convert your POJO to XML. First, you attach the @XmlElement annotation to the getter:
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;
@XmlRootElement
public class Book {
private final String title;
public Book(final String title) {
this.title = title;
}
@XmlElement
public String getTitle() {
return this.title;
}
}Then, you create a marshaller and ask it to convert an instance of class Book into XML:
final Book book = new Book("0132350882", "Clean Code");
final JAXBContext ctx = JAXBContext.newInstance(Book.class);
final Marshaller marshaller = ctx.createMarshaller();
marshaller.marshal(book, System.out);Who is creating the XML? Not the book. Someone else, outside of the class Book. This is very wrong. Instead, this is how this should have been done. First, the class that has no idea about XML:
class DefaultBook implements Book {
private final String title;
DefaultBook(final String title) {
this.title = title;
}
@Override
public String getTitle() {
return this.title;
}
}Then, the decorator that prints it to the XML:
class XmlBook implements Book{
private final Book origin;
XmlBook(final Book book) {
this.origin = book;
}
@Override
public String getTitle() {
return this.origin.getTitle();
}
public String toXML() {
return String.format(
"<book><title>%s</title></book>",
this.getTitle()
);
}
}Now, in order to print the book in XML we do the following:
String xml = new XmlBook(
new DefaultBook("Elegant Objects")
).toXML();The XML printing functionality is inside XmlBook. If you don’t like the decorator idea, you can move the toXML() method to the DefaultBook class. It’s not important. What is important is that the functionality always stays where it belongs—inside the object. Only the object knows how to print itself to the XML. Nobody else!
@RetryOnFailure
Here is an example (from my own library):
import com.jcabi.aspects.RetryOnFailure;
class Foo {
@RetryOnFailure
public String load(URL url) {
return url.openConnection().getContent();
}
}After compilation, we run a so called AOP weaver that technically turns our code into something like this:
class Foo {
public String load(URL url) {
while (true) {
try {
return _Foo.load(url);
} catch (Exception ex) {
// ignore it
}
}
}
class _Foo {
public String load(URL url) {
return url.openConnection().getContent();
}
}
}I simplified the actual algorithm of retrying a method call on failure, but I’m sure you get the idea. AspectJ, the AOP engine, uses @RetryOnFailure annotation as a signal, informing us that the class has to be wrapped into another one. This is happening behind the scenes. We don’t see that supplementary class, which implements the retrying algorithm. But the bytecode produced by the AspectJ weaver contains a modified version of class Foo.
That is exactly what is wrong with this approach—we don’t see and don’t control the instantiation of that supplementary object. Object composition, which is the most important process in object design, is hidden somewhere behind the scenes. You may say that we don’t need to see it since it’s supplementary. I disagree. We must see how our objects are composed. We may not care about how they work, but we must see the entire composition process.
A much better design would look like this (instead of annotations):
Foo foo = new FooThatRetries(new Foo());And then, the implementation of FooThatRetries:
class FooThatRetries implements Foo {
private final Foo origin;
FooThatRetries(Foo foo) {
this.origin = foo;
}
public String load(URL url) {
return new Retry().eval(
new Retry.Algorithm<String>() {
@Override
public String eval() {
return FooThatRetries.this.load(url);
}
}
);
}
}And now, the implementation of Retry:
class Retry {
public <T> T eval(Retry.Algorithm<T> algo) {
while (true) {
try {
return algo.eval();
} catch (Exception ex) {
// ignore it
}
}
}
interface Algorithm<T> {
T eval();
}
}Is the code longer? Yes. Is it cleaner? A lot more. I regret that I didn’t understand it two years ago, when I started to work with jcabi-aspects.
The bottom line is that annotations are bad. Don’t use them. What should be used instead? Object composition.
What could be worse than annotations? Configurations. For example, XML configurations. Spring XML configuration mechanisms is a perfect example of terrible design. I’ve said it many times before. Let me repeat it again—Spring Framework is one of the worst software products in the Java world. If you can stay away from it, you will do yourself a big favor.
There should not be any “configurations” in OOP. We can’t configure our objects if they are real objects. We can only instantiate them. And the best method of instantiation is operator new. This operator is the key instrument for an OOP developer. Taking it away from us and giving us “configuration mechanisms” is an unforgivable crime.

Long story short, there is one big problem with annotations—they encourage us to implement object functionality outside of an object, which is against the very principle of encapsulation. The object is not solid any more, since its behavior is not defined entirely by its own methods—some of its functionality stays elsewhere. Why is it bad? Let’s see in a few examples.
@Inject
Say we annotate a property with @Inject:
import javax.inject.Inject;
public class Books {
@Inject
private final DB db;
// some methods here, which use this.db
}Then we have an injector that knows what to inject:
Injector injector = Guice.createInjector(
new AbstractModule() {
@Override
public void configure() {
this.bind(DB.class).toInstance(
new Postgres("jdbc:postgresql:5740/main")
);
}
}
);Now we’re making an instance of class Books via the container:
Books books = injector.getInstance(Books.class);The class Books has no idea how and who will inject an instance of class DB into it. This will happen behind the scenes and outside of its control. The injection will do it. It may look convenient, but this attitude causes a lot of damage to the entire code base. The control is lost (not inverted, but lost!). The object is not in charge any more. It can’t be responsible for what’s happening to it.
Instead, here is how this should be done:
class Books {
private final DB db;
Books(final DB base) {
this.db = base;
}
// some methods here, which use this.db
}This article explains why Dependency Injection containers are a wrong idea in the first place: Dependency Injection Containers are Code Polluters. Annotations basically provoke us to make the containers and use them. We move functionality outside of our objects and put it into containers, or somewhere else. That’s because we don’t want to duplicate the same code over and over again, right? That’s correct, duplication is bad, but tearing an object apart is even worse. Way worse. The same is true about ORM (JPA/Hibernate), where annotations are being actively used. Check this post, it explains what is wrong about ORM: ORM Is an Offensive Anti-Pattern. Annotations by themselves are not the key motivator, but they help us and encourage us by tearing objects apart and keeping parts in different places. They are containers, sessions, managers, controllers, etc.
@XmlElement
This is how JAXB works, when you want to convert your POJO to XML. First, you attach the @XmlElement annotation to the getter:
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;
@XmlRootElement
public class Book {
private final String title;
public Book(final String title) {
this.title = title;
}
@XmlElement
public String getTitle() {
return this.title;
}
}Then, you create a marshaller and ask it to convert an instance of class Book into XML:
final Book book = new Book("0132350882", "Clean Code");
final JAXBContext ctx = JAXBContext.newInstance(Book.class);
final Marshaller marshaller = ctx.createMarshaller();
marshaller.marshal(book, System.out);Who is creating the XML? Not the book. Someone else, outside of the class Book. This is very wrong. Instead, this is how this should have been done. First, the class that has no idea about XML:
class DefaultBook implements Book {
private final String title;
DefaultBook(final String title) {
this.title = title;
}
@Override
public String getTitle() {
return this.title;
}
}Then, the decorator that prints it to the XML:
class XmlBook implements Book{
private final Book origin;
XmlBook(final Book book) {
this.origin = book;
}
@Override
public String getTitle() {
return this.origin.getTitle();
}
public String toXML() {
return String.format(
"<book><title>%s</title></book>",
this.getTitle()
);
}
}Now, in order to print the book in XML we do the following:
String xml = new XmlBook(
new DefaultBook("Elegant Objects")
).toXML();The XML printing functionality is inside XmlBook. If you don’t like the decorator idea, you can move the toXML() method to the DefaultBook class. It’s not important. What is important is that the functionality always stays where it belongs—inside the object. Only the object knows how to print itself to the XML. Nobody else!
@RetryOnFailure
Here is an example (from my own library):
import com.jcabi.aspects.RetryOnFailure;
class Foo {
@RetryOnFailure
public String load(URL url) {
return url.openConnection().getContent();
}
}After compilation, we run a so called AOP weaver that technically turns our code into something like this:
class Foo {
public String load(URL url) {
while (true) {
try {
return _Foo.load(url);
} catch (Exception ex) {
// ignore it
}
}
}
class _Foo {
public String load(URL url) {
return url.openConnection().getContent();
}
}
}I simplified the actual algorithm of retrying a method call on failure, but I’m sure you get the idea. AspectJ, the AOP engine, uses @RetryOnFailure annotation as a signal, informing us that the class has to be wrapped into another one. This is happening behind the scenes. We don’t see that supplementary class, which implements the retrying algorithm. But the bytecode produced by the AspectJ weaver contains a modified version of class Foo.
That is exactly what is wrong with this approach—we don’t see and don’t control the instantiation of that supplementary object. Object composition, which is the most important process in object design, is hidden somewhere behind the scenes. You may say that we don’t need to see it since it’s supplementary. I disagree. We must see how our objects are composed. We may not care about how they work, but we must see the entire composition process.
A much better design would look like this (instead of annotations):
Foo foo = new FooThatRetries(new Foo());And then, the implementation of FooThatRetries:
class FooThatRetries implements Foo {
private final Foo origin;
FooThatRetries(Foo foo) {
this.origin = foo;
}
public String load(URL url) {
return new Retry().eval(
new Retry.Algorithm<String>() {
@Override
public String eval() {
return FooThatRetries.this.load(url);
}
}
);
}
}And now, the implementation of Retry:
class Retry {
public <T> T eval(Retry.Algorithm<T> algo) {
while (true) {
try {
return algo.eval();
} catch (Exception ex) {
// ignore it
}
}
}
interface Algorithm<T> {
T eval();
}
}Is the code longer? Yes. Is it cleaner? A lot more. I regret that I didn’t understand it two years ago, when I started to work with jcabi-aspects.
The bottom line is that annotations are bad. Don’t use them. What should be used instead? Object composition.
What could be worse than annotations? Configurations. For example, XML configurations. Spring XML configuration mechanisms is a perfect example of terrible design. I’ve said it many times before. Let me repeat it again—Spring Framework is one of the worst software products in the Java world. If you can stay away from it, you will do yourself a big favor.
There should not be any “configurations” in OOP. We can’t configure our objects if they are real objects. We can only instantiate them. And the best method of instantiation is operator new. This operator is the key instrument for an OOP developer. Taking it away from us and giving us “configuration mechanisms” is an unforgivable crime.
https://www.yegor256.com/2016/04/12/java-annotations-are-evil.html
Java Annotations Are a Big Mistake
- Seattle, WA
- Yegor Bugayenko
Annotations were introduced in Java 5, and we all got excited. Such a great instrument to make code shorter! No more Hibernate/Spring XML configuration files! Just annotations, right there in the code where we need them. No more marker interfaces, just a runtime-retained reflection-discoverable annotation! I was excited too. Moreover, I’ve made a few open source libraries which use annotations heavily. Take jcabi-aspects, for example. However, I’m not excited any more. Moreover, I believe that annotations are a big mistake in Java design.

Long story short, there is one big problem with annotations—they encourage us to implement object functionality outside of an object, which is against the very principle of encapsulation. The object is not solid any more, since its behavior is not defined entirely by its own methods—some of its functionality stays elsewhere. Why is it bad? Let’s see in a few examples.
@Inject
Say we annotate a property with @Inject:
import javax.inject.Inject;
public class Books {
@Inject
private final DB db;
// some methods here, which use this.db
}Then we have an injector that knows what to inject:
Injector injector = Guice.createInjector(
new AbstractModule() {
@Override
public void configure() {
this.bind(DB.class).toInstance(
new Postgres("jdbc:postgresql:5740/main")
);
}
}
);
Now we’re making an instance of class Books via the container:
Books books = injector.getInstance(Books.class);
The class Books has no idea how, or by whom, an instance of class DB will be injected into it. This happens behind the scenes and outside of its control. The injector does it. It may look convenient, but this attitude causes a lot of damage to the entire code base. The control is lost (not inverted, but lost!). The object is not in charge any more. It can’t be responsible for what’s happening to it.
Instead, here is how this should be done:
class Books {
private final DB db;
Books(final DB base) {
this.db = base;
}
// some methods here, which use this.db
}
This article explains why Dependency Injection containers are a wrong idea in the first place: Dependency Injection Containers are Code Polluters. Annotations basically provoke us to build such containers and use them. We move functionality outside of our objects and put it into containers, or somewhere else. That’s because we don’t want to duplicate the same code over and over again, right? Correct, duplication is bad, but tearing an object apart is even worse. Way worse. The same is true of ORM (JPA/Hibernate), where annotations are used heavily. Check this post, which explains what is wrong with ORM: ORM Is an Offensive Anti-Pattern. Annotations by themselves are not the key motivator, but they help and encourage us to tear objects apart and keep the parts in different places: containers, sessions, managers, controllers, etc.
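For illustration, here is how the constructor-injection version composes at the application’s entry point, with operator new doing all the wiring in plain sight. The DB interface, the Postgres class, and its fetch() method are minimal stand-ins I’m assuming for the sake of a runnable sketch; they are not from any real library:

```java
// Hypothetical stand-ins for the article's DB and Postgres.
interface DB {
    String fetch(String query);
}

class Postgres implements DB {
    private final String url;
    Postgres(final String url) {
        this.url = url;
    }
    @Override
    public String fetch(final String query) {
        // A real implementation would open a JDBC connection here.
        return "result of " + query + " via " + this.url;
    }
}

class Books {
    private final DB db;
    Books(final DB base) {
        this.db = base;
    }
    String titleById(final String id) {
        return this.db.fetch("SELECT title FROM book WHERE id = " + id);
    }
}

public class Main {
    public static void main(String[] args) {
        // All composition happens here, visibly, with operator new:
        Books books = new Books(new Postgres("jdbc:postgresql:5740/main"));
        System.out.println(books.titleById("1"));
        // -> result of SELECT title FROM book WHERE id = 1 via jdbc:postgresql:5740/main
    }
}
```

No container, no reflection: the object receives its dependency openly and stays in charge of it.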
@XmlElement
This is how JAXB works when you want to convert your POJO to XML. First, you attach the @XmlElement annotation to the getter:
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;
@XmlRootElement
public class Book {
private final String title;
public Book(final String title) {
this.title = title;
}
@XmlElement
public String getTitle() {
return this.title;
}
}
Then, you create a marshaller and ask it to convert an instance of class Book into XML:
final Book book = new Book("Clean Code");
final JAXBContext ctx = JAXBContext.newInstance(Book.class);
final Marshaller marshaller = ctx.createMarshaller();
marshaller.marshal(book, System.out);
Who is creating the XML? Not the book. Someone else, outside of the class Book. This is very wrong. Instead, this is how it should have been done. First, the class that has no idea about XML:
class DefaultBook implements Book {
private final String title;
DefaultBook(final String title) {
this.title = title;
}
@Override
public String getTitle() {
return this.title;
}
}
Then, the decorator that prints it to XML:
class XmlBook implements Book {
private final Book origin;
XmlBook(final Book book) {
this.origin = book;
}
@Override
public String getTitle() {
return this.origin.getTitle();
}
public String toXML() {
return String.format(
"<book><title>%s</title></book>",
this.getTitle()
);
}
}
Now, in order to print the book in XML, we do the following:
String xml = new XmlBook(
new DefaultBook("Elegant Objects")
).toXML();
The XML printing functionality is inside XmlBook. If you don’t like the decorator idea, you can move the toXML() method to the DefaultBook class. That’s not important. What is important is that the functionality always stays where it belongs—inside the object. Only the object knows how to print itself to XML. Nobody else!
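The snippets above assume a Book interface with getTitle() that is never shown. Here is a minimal self-contained version of the whole composition, so the decorator actually compiles; the interface is my reconstruction of what the text implies:

```java
// The Book interface the decorator example assumes (not shown in the text).
interface Book {
    String getTitle();
}

class DefaultBook implements Book {
    private final String title;
    DefaultBook(final String title) {
        this.title = title;
    }
    @Override
    public String getTitle() {
        return this.title;
    }
}

// Decorator: adds XML printing without touching DefaultBook.
class XmlBook implements Book {
    private final Book origin;
    XmlBook(final Book book) {
        this.origin = book;
    }
    @Override
    public String getTitle() {
        return this.origin.getTitle();
    }
    public String toXML() {
        return String.format(
            "<book><title>%s</title></book>",
            this.getTitle()
        );
    }
}

public class Main {
    public static void main(String[] args) {
        String xml = new XmlBook(new DefaultBook("Elegant Objects")).toXML();
        System.out.println(xml);
        // -> <book><title>Elegant Objects</title></book>
    }
}
```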
@RetryOnFailure
Here is an example (from my own library):
import com.jcabi.aspects.RetryOnFailure;
class Foo {
@RetryOnFailure
public String load(URL url) {
return url.openConnection().getContent();
}
}
After compilation, we run a so-called AOP weaver that effectively turns our code into something like this:
class Foo {
public String load(URL url) {
while (true) {
try {
return new _Foo().load(url);
} catch (Exception ex) {
// ignore it
}
}
}
class _Foo {
public String load(URL url) {
return url.openConnection().getContent();
}
}
}
I simplified the actual algorithm of retrying a method call on failure, but I’m sure you get the idea. AspectJ, the AOP engine, uses the @RetryOnFailure annotation as a signal that the class has to be wrapped into another one. This happens behind the scenes. We don’t see that supplementary class, which implements the retrying algorithm, but the bytecode produced by the AspectJ weaver contains a modified version of class Foo.
That is exactly what is wrong with this approach—we don’t see and don’t control the instantiation of that supplementary object. Object composition, which is the most important process in object design, is hidden somewhere behind the scenes. You may say that we don’t need to see it since it’s supplementary. I disagree. We must see how our objects are composed. We may not care about how they work, but we must see the entire composition process.
A much better design would look like this (instead of annotations):
Foo foo = new FooThatRetries(new Foo());
And then, the implementation of FooThatRetries:
class FooThatRetries extends Foo {
private final Foo origin;
FooThatRetries(Foo foo) {
this.origin = foo;
}
public String load(URL url) {
return new Retry().eval(
new Retry.Algorithm<String>() {
@Override
public String eval() {
return FooThatRetries.this.origin.load(url);
}
}
);
}
}
And now, the implementation of Retry:
class Retry {
public <T> T eval(Retry.Algorithm<T> algo) {
while (true) {
try {
return algo.eval();
} catch (Exception ex) {
// ignore it
}
}
}
interface Algorithm<T> {
T eval();
}
}
Is the code longer? Yes. Is it cleaner? A lot. I regret that I didn’t understand this two years ago, when I started to work with jcabi-aspects.
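One caveat with the while (true) loop above: it never gives up. As a sketch of the same decorating idea with a bounded number of attempts (the attempt limit is my addition, not part of the article’s Retry):

```java
// Retry with a bounded attempt count; Algorithm mirrors the article's
// interface, the limit is an addition for safety.
class Retry {
    private final int attempts;
    Retry(final int attempts) {
        this.attempts = attempts;
    }
    public <T> T eval(final Algorithm<T> algo) {
        RuntimeException last = null;
        for (int attempt = 0; attempt < this.attempts; ++attempt) {
            try {
                return algo.eval();
            } catch (RuntimeException ex) {
                last = ex; // remember the failure and try again
            }
        }
        throw last; // all attempts exhausted
    }
    interface Algorithm<T> {
        T eval();
    }
}

public class Main {
    public static void main(String[] args) {
        // An algorithm that fails twice, then succeeds:
        final int[] calls = {0};
        String result = new Retry(5).eval(
            new Retry.Algorithm<String>() {
                @Override
                public String eval() {
                    if (++calls[0] < 3) {
                        throw new IllegalStateException("not yet");
                    }
                    return "loaded";
                }
            }
        );
        System.out.println(result + " after " + calls[0] + " calls");
        // -> loaded after 3 calls
    }
}
```

The composition stays fully visible: the caller decides how many retries happen, instead of a weaver deciding behind the scenes.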
The bottom line is that annotations are bad. Don’t use them. What should be used instead? Object composition.
What could be worse than annotations? Configurations. For example, XML configurations. Spring’s XML configuration mechanism is a perfect example of terrible design. I’ve said it many times before. Let me repeat it again—Spring Framework is one of the worst software products in the Java world. If you can stay away from it, you will do yourself a big favor.
There should not be any “configurations” in OOP. We can’t configure our objects if they are real objects. We can only instantiate them. And the best method of instantiation is operator new. This operator is the key instrument for an OOP developer. Taking it away from us and giving us “configuration mechanisms” is an unforgivable crime.
Getters are evil. No need to argue about this; it’s settled. You disagree? Let’s discuss that later. For now, let’s say we want to get rid of getters. The key question is how that is possible at all. We do need to get the data out of an object, right? Nope. Wrong.
I’m suggesting to use “printers” instead. Instead of exposing data via getters, an object will have the functionality of printing itself to some media.
Let’s say this is our class:
public class Book {
private final String isbn =
"0735619654";
private final String title =
"Object Thinking";
}
We need it to be transferred into XML format. A more or less traditional way to do it is via getters and JAXB:
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;
@XmlRootElement
public class Book {
private final String isbn =
"0735619654";
private final String title =
"Object Thinking";
@XmlElement
public String getIsbn() {
return this.isbn;
}
@XmlElement
public String getTitle() {
return this.title;
}
}
This is a very offensive way of treating the object. We’re basically exposing everything inside it to the public. It was a nice little self-sufficient solid object, and we turned it into a bag of data, which anyone can access in many possible ways. For reading only, of course, but still.
It is convenient to have these getters, you may say. We are all used to them. If we want to convert it into JSON, they will be very helpful. If we want to use this poor object as a data object in JSP, getters will help us. There are many examples in Java, where getters are being actively used.
This is not because they are so effective. This is because we’re so procedural in our way of thinking. We don’t trust our objects. We only trust the data they store. We don’t want this Book object to generate the XML. We want it to give us the data. We will build the XML. The Book is too stupid to do that job. We’re way smarter!
I’m suggesting to stop thinking this way. Instead, let’s try to give this poor Book a chance, and equip it with a “printer”:
public class Book {
private final String isbn =
"0735619654";
private final String title =
"Object Thinking";
public String toXML() {
return String.format(
"<book><isbn>%s</isbn><title>%s</title></book>",
this.isbn, this.title
);
}
}
This isn’t the best implementation, but you get the idea. The object is not exposing its internals any more. We can’t get its ISBN and its title. We can only ask it to print itself in XML format.
We can add an additional printer, if another format is required:
public class Book {
private final String isbn =
"0735619654";
private final String title =
"Object Thinking";
public String toJSON() {
return String.format(
"{\"isbn\":\"%s\", \"title\":\"%s\"}",
this.isbn, this.title
);
}
}
Again, not the best implementation, but you see what I’m trying to show. Each time we need a new format, we create a new printer.
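One concrete reason it is “not the best implementation”: the format string breaks as soon as a title contains a quote or a backslash. A slightly more careful printer, still just a sketch (the escape() helper is my addition), escapes those characters:

```java
class Book {
    private final String isbn = "0735619654";
    private final String title = "Object Thinking";
    public String toJSON() {
        return String.format(
            "{\"isbn\":\"%s\", \"title\":\"%s\"}",
            escape(this.isbn), escape(this.title)
        );
    }
    // Minimal JSON string escaping; a real printer would also
    // handle control characters.
    static String escape(final String text) {
        return text.replace("\\", "\\\\").replace("\"", "\\\"");
    }
}

public class Main {
    public static void main(String[] args) {
        System.out.println(new Book().toJSON());
        // -> {"isbn":"0735619654", "title":"Object Thinking"}
    }
}
```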
You may say that the object will be rather big if there are many formats. That’s true, but a big object is bad design in the first place. I would say that if there is more than one printer, it’s a problem.
So, what to do if we need multiple formats? Use “media” that the printers will print to. Say, we have an object that represents a record in MySQL. We want it to be printable to XML, HTML, JSON, some binary format, and God knows what else. We could add that many printers to it, but the object would be big and ugly. To avoid that, introduce a new object that represents the media the data will be printed to:
public class Book {
private final String isbn =
"0735619654";
private final String title =
"Object Thinking";
public Media print(Media media) {
return media
.with("isbn", this.isbn)
.with("title", this.title);
}
}
Again, it’s a very primitive design of that immutable Media class, but you get the idea—the media accepts the data. Now, we want to print our object to JSON (this design is not really perfect, since JsonObjectBuilder is not immutable, even though it looks like one…):
class JsonMedia implements Media {
private final JsonObjectBuilder builder;
JsonMedia() {
this(Json.createObjectBuilder());
}
JsonMedia(JsonObjectBuilder bdr) {
this.builder = bdr;
}
@Override
public Media with(String name, String value) {
return new JsonMedia(
this.builder.add(name, value)
);
}
public JsonObject json() {
return this.builder.build();
}
}
Now, we make an instance of JsonMedia and ask our book to print itself there:
JsonMedia media = new JsonMedia();
book.print(media);
JsonObject json = media.json();
Voilà! The JSON object is ready, and the book has no idea what exactly was printed just now. We need to print the book to XML? We create XmlMedia, which will print the book to XML. The Book class stays small, while the complexity of the “media” objects is unlimited.
My point here is simple—no getters, just printers!
" /> are evil. No need to argue about this, it’s settled. You disagree? Let’s discuss that later. For now, let’s say, we want to get rid of getters. The key question is how is it possible at all? We do need to get the data out of an object, right? Nope. Wrong.
I’m suggesting to use “printers” instead. Instead of exposing data via getters, an object will have a functionality of printing itself to some media.
Let’s say this is our class:
public class Book {
private final String isbn =
"0735619654";
private final String title =
"Object Thinking";
}We need it to be transferred into XML format. A more or less traditional way to do it is via getters and JAXB:
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;
@XmlRootElement
public class Book {
private final String isbn =
"0735619654";
private final String title =
"Object Thinking";
@XmlElement
public String getIsbn() {
return this.isbn;
}
@XmlElement
public String getTitle() {
return this.title;
}
}This is a very offensive way of treating the object. We’re basically exposing everything that’s inside to the public. It was a nice little self-sufficient solid object and we turned it into a bag of data, which anyone can access in many possible ways. We can access it for reading, of course.
It is convenient to have these getters, you may say. We are all used to them. If we want to convert it into JSON, they will be very helpful. If we want to use this poor object as a data object in JSP, getters will help us. There are many examples in Java, where getters are being actively used.
This is not because they are so effective. This is because we’re so procedural in our way of thinking. We don’t trust our objects. We only trust the data they store. We don’t want this Book object to generate the XML. We want it to give us the data. We will build the XML. The Book is too stupid to do that job. We’re way smarter!
I’m suggesting to stop thinking this way. Instead, let’s try to give this poor Book a chance, and equip it with a “printer”:
public class Book {
private final String isbn =
"0735619654";
private final String title =
"Object Thinking";
public String toXML() {
return String.format(
"<book><isbn>%s</isbn><title>%s</title></book>",
this.isbn, this.title
);
}
}This isn’t the best implementation, but you got the idea. The object is not exposing its internals any more. We can’t get its ISBN and its title. We can only ask it to print itself in XML format.
We can add an additional printer, if another format is required:
public class Book {
private final String isbn =
"0735619654";
private final String title =
"Object Thinking";
public String toJSON() {
return String.format(
"{\"isbn\":\"%s\", \"title\":\"%s\"}",
this.isbn, this.title
);
}
}Again, not the best implementation, but you see what I’m trying to show. Each time we need a new format, we create a new printer.
You may say that the object will be rather big if there will be many formats. That’s true, but a big object is a bad design in the first place. I would say that if there is more than one printer—it’s a problem.
So, what to do if we need multiple formats? Use “media,” where that printers will be able to print to. Say, we have an object that represents a record in MySQL. We want it to be printable to XML, HTML, JSON, some binary format and God knows what else. We can add that many printers to it, but the object will be big and ugly. To avoid that, introduce a new object, that represents the media where the data will be printed to:
public class Book {
private final String isbn =
"0735619654";
private final String title =
"Object Thinking";
public Media print(Media media) {
return media
.with("isbn", this.isbn)
.with("title", this.title);
}
}Again, it’s a very primitive design of that immutable Media class, but you got the idea—the media accepts the data. Now, we want to print our object to JSON (this design is not really perfect, since JsonObjectBuilder is not immutable, even though it looks like one…):
class JsonMedia implements Media {
private final JsonObjectBuilder builder;
JsonMedia() {
this("book");
}
JsonMedia(String head) {
this(Json.createObjectBuilder().add(head));
}
JsonMedia(JsonObjectBuilder bdr) {
this.builder = bdr;
}
@Override
public Media with(String name, String value) {
return new JsonMedia(
this.builder.add(name, value)
);
}
public JsonObject json() {
return this.builder.build();
}
}Now, we make an instance of JsonMedia and ask our book to print itself there:
JsonMedia media = new JsonMedia("book");
book.print(media);
JsonObject json = media.json();Voilà! The JSON object is ready and the book has no idea about what exactly what printed just now. We need to print the book to XML? We create XmlMedia, which will print the book to XML. The Book class stays small, while the complexity of “media” objects is unlimited.
My point here is simple—no getters, just printers!
"/>
https://www.yegor256.com/2016/04/05/printers-instead-of-getters.html
Printers Instead of Getters
- Palo Alto, CA
- Yegor Bugayenko
Getters and setters are evil. No need to argue about this; it’s settled. You disagree? Let’s discuss that later. For now, let’s say we want to get rid of getters. The key question is: how is that possible at all? We do need to get the data out of an object, right? Nope. Wrong.

I suggest using “printers” instead. Instead of exposing data via getters, an object will have the ability to print itself to some media.
Let’s say this is our class:
public class Book {
private final String isbn =
"0735619654";
private final String title =
"Object Thinking";
}
We need it to be converted into XML. A more or less traditional way to do that is via getters and JAXB:
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;
@XmlRootElement
public class Book {
private final String isbn =
"0735619654";
private final String title =
"Object Thinking";
@XmlElement
public String getIsbn() {
return this.isbn;
}
@XmlElement
public String getTitle() {
return this.title;
}
}
This is a very offensive way to treat an object. We’re basically exposing everything inside it to the public. It was a nice little self-sufficient, solid object, and we turned it into a bag of data that anyone can access in many possible ways. True, we can only access it for reading, but it is a bag of data nonetheless.
It is convenient to have these getters, you may say. We are all used to them. If we want to convert the object into JSON, they will be very helpful. If we want to use this poor object as a data object in JSP, getters will help us. There are many places in Java where getters are actively used.
This is not because they are so effective. This is because we’re so procedural in our way of thinking. We don’t trust our objects. We only trust the data they store. We don’t want this Book object to generate the XML. We want it to give us the data. We will build the XML. The Book is too stupid to do that job. We’re way smarter!
I suggest we stop thinking this way. Instead, let’s give this poor Book a chance and equip it with a “printer”:
public class Book {
private final String isbn =
"0735619654";
private final String title =
"Object Thinking";
public String toXML() {
return String.format(
"<book><isbn>%s</isbn><title>%s</title></book>",
this.isbn, this.title
);
}
}
This isn’t the best implementation, but you get the idea. The object is not exposing its internals any more. We can’t get its ISBN or its title. We can only ask it to print itself in XML format.
We can add another printer if a different format is required:
public class Book {
private final String isbn =
"0735619654";
private final String title =
"Object Thinking";
public String toJSON() {
return String.format(
"{\"isbn\":\"%s\", \"title\":\"%s\"}",
this.isbn, this.title
);
}
}
Again, not the best implementation, but you see what I’m trying to show. Each time we need a new format, we create a new printer.
You may say that the object will grow rather big if there are many formats. That’s true, but a big object is bad design in the first place. I would even say that more than one printer is already a problem.
So, what do we do if we need multiple formats? We use “media” that the printers can print to. Say we have an object that represents a record in MySQL. We want it to be printable to XML, HTML, JSON, some binary format, and God knows what else. We could add that many printers to it, but the object would become big and ugly. To avoid that, we introduce a new object that represents the media the data will be printed to:
public class Book {
private final String isbn =
"0735619654";
private final String title =
"Object Thinking";
public Media print(Media media) {
return media
.with("isbn", this.isbn)
.with("title", this.title);
}
}
Again, it’s a very primitive design of that immutable Media class, but you get the idea: the media accepts the data. Now we want to print our object to JSON (this design is not really perfect, since JsonObjectBuilder is not immutable, even though it looks like one…):
class JsonMedia implements Media {
private final JsonObjectBuilder builder;
JsonMedia() {
this(Json.createObjectBuilder());
}
JsonMedia(JsonObjectBuilder bdr) {
this.builder = bdr;
}
@Override
public Media with(String name, String value) {
return new JsonMedia(
this.builder.add(name, value)
);
}
public JsonObject json() {
return this.builder.build();
}
}
Now we make an instance of JsonMedia and ask our book to print itself there:
JsonMedia media = new JsonMedia();
book.print(media);
JsonObject json = media.json();
Voilà! The JSON object is ready, and the book has no idea what exactly was printed just now. We need to print the book to XML? We create XmlMedia. The Book class stays small, while the complexity of the “media” objects is unlimited.
My point here is simple—no getters, just printers!
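For symmetry, here is what an XmlMedia might look like. This is my own sketch, not from the original post: the Media interface is never shown there, so I assume a minimal one with a single with(String, String) method, and I keep the sketch genuinely immutable.

```java
// Assumed minimal Media interface; the original post never shows it.
interface Media {
    Media with(String name, String value);
}

// A hypothetical XmlMedia: the same idea as JsonMedia, but the "paper" is XML.
// Each call to with() returns a new object, so this media is truly immutable.
class XmlMedia implements Media {
    private final String root;
    private final String body;
    XmlMedia(String head) {
        this(head, "");
    }
    private XmlMedia(String head, String content) {
        this.root = head;
        this.body = content;
    }
    @Override
    public XmlMedia with(String name, String value) {
        // Append one more element; the original media stays untouched.
        return new XmlMedia(
            this.root,
            this.body + String.format("<%s>%s</%s>", name, value, name)
        );
    }
    public String xml() {
        return String.format("<%s>%s</%s>", this.root, this.body, this.root);
    }
}
```

Then new XmlMedia("book").with("isbn", "0735619654").with("title", "Object Thinking").xml() yields the same XML the first toXML() printer produced, but the Book no longer knows anything about angle brackets.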
https://www.yegor256.com/2016/03/22/try-finally-if-not-null.html
Try. Finally. If. Not. Null.
- Palo Alto, CA
- Yegor Bugayenko
There is a very typical mistake in the pre-Java 7 “try/finally” scenario, which I keep seeing in so many code reviews. I just have to write about it. Java 7 introduced a solution (try-with-resources), but it doesn’t cover all situations. Sometimes we have to deal with resources that are not AutoCloseable. Let’s open and close them correctly, please.

This is how it looks (assuming we are in Java 6):
InputStream input = null;
try {
input = url.openStream();
// reads the stream, throws IOException
} catch (IOException ex) {
throw new RuntimeException(ex);
} finally {
if (input != null) {
input.close();
}
}
I already wrote about null and its evil nature. Here it comes again. If you simply follow the rule of “no NULL anywhere, ever,” this code needs immediate refactoring. The correct version looks like this:
final InputStream input = url.openStream();
try {
// reads the stream, throws IOException
} catch (IOException ex) {
throw new RuntimeException(ex);
} finally {
input.close();
}
There is no null anymore, and it’s much cleaner, isn’t it?
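For the record, this is exactly what Java 7 automates when the resource is AutoCloseable. Here is a minimal runnable sketch of try-with-resources; I use ByteArrayInputStream instead of url.openStream() so the example is self-contained.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class FirstByte {
    static int of(byte[] data) {
        // The compiler generates the close() call for us: no null, no finally.
        try (InputStream input = new ByteArrayInputStream(data)) {
            return input.read();
        } catch (IOException ex) {
            throw new RuntimeException(ex);
        }
    }
}
```

FirstByte.of(new byte[] {42}) returns 42, and the stream is closed even if read() throws.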
There are situations where opening the resource itself throws IOException and we can’t move it outside of the try/catch. In that case, we need two try/catch blocks:
final InputStream input;
try {
input = url.openStream();
} catch (IOException ex) {
throw new RuntimeException(ex);
}
try {
// reads the stream, throws IOException
} catch (IOException ex) {
throw new RuntimeException(ex);
} finally {
input.close();
}
But there should be no null, ever!
The presence of null in Java code is a clear indicator of a code smell. Something is not right if you have to use null. The only place where null is justified is where we use third-party APIs or the JDK. They may return null sometimes because… well, their design is bad. Then we have no option but to check if (x == null). But that’s it. Nowhere else is null acceptable.
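And when the resource is not AutoCloseable, as mentioned at the top, the same discipline applies: acquire outside the try, release in finally, no null anywhere. A sketch with java.util.concurrent.locks.ReentrantLock (my own example, not from the post):

```java
import java.util.concurrent.locks.ReentrantLock;

public class Counter {
    private final ReentrantLock lock = new ReentrantLock();
    private int value;
    public int increment() {
        // Acquire before the try: if lock() failed, there would be nothing to release.
        this.lock.lock();
        try {
            return ++this.value;
        } finally {
            // Always release, whether the body returned or threw.
            this.lock.unlock();
        }
    }
}
```

The lock plays the role of the InputStream here: it is opened exactly once, before the try, so the finally block never needs a null check.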
https://www.yegor256.com/2016/02/03/design-patterns-and-anti-patterns.html
Design Patterns and Anti-Patterns, Love and Hate
- Palo Alto, CA
- Yegor Bugayenko
Design Patterns are … Come on, you know what they are. They are something we love and hate. We love them because they let us write code without thinking. We hate them when we see the code of someone who is used to writing code without thinking. Am I wrong? Now, let me try to go through all of them and show you how much I love or hate each one. Follow me, in alphabetical order.

Abstract Factory. It’s OK.
Adapter. Good one!
Bridge. Good one!
Builder. Terrible concept, since it encourages us to create and use big, complex objects. If you need a builder, there is already something wrong in your code. Refactor it so any object is easy to create through its constructors.
Chain of Responsibility. Seems fine.
Command. It’s OK.
Composite. Good one; check out this too.
Data Transfer Object. It’s just a shame.
Decorator. My favorite one. I highly recommend you use it.
Facade. Bad idea. In OOP, we need objects and only objects, not facades for them. This design pattern is very procedural in its spirit, since a facade is nothing more than a collection of procedures.
Factory Method. This one seems OK.
Flyweight. It’s a workaround, as I see it, so it’s not a good design pattern. I would recommend you not use it unless there is a really critical performance issue. But calling it a design pattern … no way. A fix for a performance problem in Java? Yes.
Front Controller. Terrible idea, as well as the entire MVC. It’s very procedural, that’s why.
Interpreter. It’s OK, but I don’t like the name. “Expression” would be a much better alternative.
Iterator. Bad idea, since it is mutable. It would be much better to have immutable “cursors.”
Lazy Initialization. It’s OK.
Marker. It’s a terrible idea, along with reflection and type casting.
MVC. Bad idea, since it’s very procedural. Controllers are the key broken element in this concept. We need real objects, not procedural controllers.
Mediator. I don’t like it. Even though it sounds like a technique for decreasing complexity and coupling, it is not really object-oriented. Who is this mediator? Just a “channel” between objects? Why shouldn’t objects communicate directly? Because they are too complex? Make them smaller and simpler, rather than inventing these mediators.
Memento. This idea implies that objects are mutable, which I’m against in general.
Module. If Wikipedia is right about this pattern, it’s something even more terrible than the Singleton.
Multiton. Really bad idea. Same as Singleton.
Null Object. Good one. By the way, see Why NULL Is Bad.
Object Pool. Good one.
Observer. The idea is good, but the name is bad, since it ends with -ER. A much better one would be “Source” and “Target.” The Source generates events and the Target listens to them.
ORM. It’s terrible and “offensive”; check this out.
Prototype. Good idea, but what does it have to do with OOP?
Proxy. Good one.
RAII. This is a really good one, and I highly recommend you use it.
Servant. A very bad idea, because it’s highly procedural.
Singleton. It’s the king of all anti-patterns. Stay away from it at all costs.
Specification. It’s OK.
State. Although it’s not implied, I feel that in most cases the use of this pattern results in mutability, a code characteristic that I’m generally against.
Strategy. A good one.
Template Method. is wrong, since implementation inheritance is procedural.
Visitor. A rather procedural concept that treats objects as data structures, which we can manipulate.
I have nothing against concurrency patterns either; they are all good, since they have almost nothing to do with object-oriented programming.
If you know some other design (anti-)patterns, let me know in the comments below. I’ll add them here.
Design Patterns are … Come on, you know what they are. They are something we love and hate. We love them because they let us write code without thinking. We hate them when we see the code of someone who is used to writing code without thinking. Am I wrong? Now, let me try to go through all of them and show you how much I love or hate each one. Follow me, in alphabetic order.

Abstract Factory. It’s OK.
Adapter. Good one!
Bridge. Good one!
Builder. Terrible concept, since it encourages us to create and use big, complex objects. If you need a builder, there is already something wrong in your code. Refactor it so any object is easy to create through its constructors.
Chain of Responsibility. Seems fine.
Command. It’s OK.
Composite. Good one; check out this too.
Data Transfer Object. It’s just a shame.
Decorator. My favorite one. I highly recommend you use it.
Facade. Bad idea. In OOP, we need objects and only objects, not facades for them. This design pattern is very procedural in its spirit, since a facade is nothing more than a collection of procedures.
Factory Method. This one seems OK.
Flyweight. It’s a workaround, as I see it, so it’s not a good design pattern. I would recommend you not use it unless there is a really critical performance issue. But calling it a design pattern … no way. A fix for a performance problem in Java? Yes.
Front Controller. Terrible idea, as well as the entire MVC. It’s very procedural, that’s why.
Interpreter. It’s OK, but I don’t like the name. “Expression” would be a much better alternative.
Iterator. Bad idea, since it is mutable. It would be much better to have immutable “cursors.”
Lazy Initialization. It’s OK.
Marker. It’s a terrible idea, along with reflection and type casting.
MVC. Bad idea, since it’s very procedural. Controllers are the key broken element in this concept. We need real objects, not procedural controllers.
Mediator. I don’t like it. Even though it sounds like a technique for decreasing complexity and coupling, it is not really object-oriented. Who is this mediator? Just a “channel” between objects? Why shouldn’t objects communicate directly? Because they are too complex? Make them smaller and simpler, rather than inventing these mediators.
Memento. This idea implies that objects are mutable, which I’m against in general.
Module. If Wikipedia is right about this pattern, it’s something even more terrible than the Singleton.
Multiton. Really bad idea. Same as Singleton.
Null Object. Good one. By the way, see Why NULL Is Bad.
Object Pool. Good one.
Observer. The idea is good, but the name is bad, since it ends with -ER. A much better one would be “Source” and “Target.” The Source generates events and the Target listens to them.
ORM. It’s terrible and “offensive”; check this out.
Prototype. Good idea, but what does it have to do with OOP?
Proxy. Good one.
RAII. This is a really good one, and I highly recommend you use it.
Servant. A very bad idea, because it’s highly procedural.
Singleton. It’s the king of all anti-patterns. Stay away from it at all costs.
Specification. It’s OK.
State. Although it’s not implied, I feel that in most cases the use of this pattern results in mutability, a code characteristic that I’m generally against.
Strategy. A good one.
Template Method. It is wrong, since implementation inheritance is procedural.
Visitor. A rather procedural concept that treats objects as data structures, which we can manipulate.
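To illustrate the immutable “cursors” suggested above as a replacement for Iterator, here is a minimal sketch; the class name and API are my own invention, not taken from any library. The idea is that advancing never mutates anything: each step returns a new cursor.

```java
import java.util.List;

// A hypothetical immutable "cursor": advancing returns a NEW cursor
// object, so no existing object ever changes its state.
final class Cursor<T> {
    private final List<T> items;
    private final int position;
    Cursor(final List<T> items) {
        this(items, 0);
    }
    private Cursor(final List<T> items, final int position) {
        this.items = items;
        this.position = position;
    }
    boolean hasItem() {
        return this.position < this.items.size();
    }
    T item() {
        return this.items.get(this.position);
    }
    Cursor<T> next() {
        // The current cursor stays where it was; a fresh one moves on.
        return new Cursor<>(this.items, this.position + 1);
    }
}
```

A loop then simply rebinds a local variable to successive cursors; every step is visible in the code instead of hidden inside a mutable counter.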
I have nothing against concurrency patterns either; they are all good, since they have almost nothing to do with object-oriented programming.
If you know some other design (anti-)patterns, let me know in the comments below. I’ll add them here.
Please, use syntax highlighting in your comments, to make them more readable.

https://www.yegor256.com/2016/01/26/defensive-programming.html
Defensive Programming via Validating Decorators
- Palo Alto, CA
- Yegor Bugayenko
Do you check the input parameters of your methods for validity? I don’t. I used to, but not anymore. I just let my methods crash with a null pointer and other exceptions when parameters are not valid. This may sound illogical, but only in the beginning. I’m suggesting you use validating decorators instead.

Let’s take a look at this rather typical Java example:
class Report {
void export(File file) {
if (file == null) {
throw new IllegalArgumentException(
"File is NULL; can't export."
);
}
if (file.exists()) {
throw new IllegalArgumentException(
"File already exists."
);
}
// Export the report to the file
}
}
Pretty defensive, right? If we remove these validations, the code will be much shorter, but it will crash with rather confusing messages if NULL is provided by the client. Moreover, if the file already exists, our Report will silently overwrite it. Pretty dangerous, right?
Yes, we must protect ourselves, and we must be defensive.
But not this way, not by bloating the class with validations that have nothing to do with its core functionality. Instead, we should use decorators to do the validation. Here is how. First, there must be an interface Report:
interface Report {
void export(File file);
}
Then, a class that implements the core functionality:
class DefaultReport implements Report {
@Override
void export(File file) {
// Export the report to the file
}
}
And, finally, a number of decorators that will protect us:
class NoWriteOverReport implements Report {
private final Report origin;
NoWriteOverReport(Report rep) {
this.origin = rep;
}
@Override
void export(File file) {
if (file.exists()) {
throw new IllegalArgumentException(
"File already exists."
);
}
this.origin.export(file);
}
}
Now, the client has the flexibility of composing a complex object from decorators that perform their specific tasks. The core object will do the reporting, while the decorators will validate parameters:
Report report = new NoNullReport(
new NoWriteOverReport(
new DefaultReport()
)
);
report.export(file);
What do we achieve with this approach? First and foremost: smaller objects. And smaller objects always mean higher maintainability. Our DefaultReport class will always remain small, no matter how many validations we may invent in the future. The more things we need to validate, the more validating decorators we will create. All of them will be small and cohesive. And we’ll be able to put them together in different variations.
Besides that, this approach makes our code much more reusable, as classes perform very few operations and don’t defend themselves by default. When we do need to be defensive, we’ll wrap them in validating decorators. But this will not always be the case. Sometimes validation is just too expensive in terms of time and memory, and we may want to work directly with objects that don’t defend themselves.
I also decided not to use the Java Validation API anymore for the same reason. Its annotations make classes much more verbose and less cohesive. I’m using validating decorators instead.
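The NoNullReport decorator used in the composition above is never shown in the article; a body consistent with the pattern might look like this. This is a sketch under the assumption that the class only guards against a NULL file before delegating:

```java
import java.io.File;

interface Report {
    void export(File file);
}

// Hypothetical NoNullReport: rejects a NULL file, then delegates to
// the decorated Report. (The article uses this class without showing
// it; this body is my assumption.)
class NoNullReport implements Report {
    private final Report origin;
    NoNullReport(final Report rep) {
        this.origin = rep;
    }
    @Override
    public void export(final File file) {
        if (file == null) {
            throw new IllegalArgumentException(
                "File is NULL; can't export."
            );
        }
        this.origin.export(file);
    }
}
```

Like NoWriteOverReport, it holds the decorated object in a final field and adds exactly one check, which is what keeps each decorator small and cohesive.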
https://www.yegor256.com/2015/12/08/temporal-coupling-between-method-calls.html
Temporal Coupling Between Method Calls
- Kiev, Ukraine
- Yegor Bugayenko
Temporal coupling happens between sequential method calls when they must stay in a particular order. This is inevitable in imperative programming, but we can reduce the negative effect of it just by turning those static procedures into functions. Take a look at this example.

Here is the code:
class Foo {
public List<String> names() {
List<String> list = new LinkedList<>();
Foo.append(list, "Jeff");
Foo.append(list, "Walter");
return list;
}
private static void append(
List<String> list, String item) {
list.add(item.toLowerCase());
}
}
What do you think about that? I believe it’s clear what names() is doing—creating a list of names. In order to avoid duplication, there is a supplementary procedure, append(), which converts an item to lowercase and adds it to the list.
This is poor design.
It is a procedural design, and there is temporal coupling between lines in method names().
Let me first show you a better (though not the best!) design, then I will try to explain its benefits:
class Foo {
public List<String> names() {
return Foo.with(
Foo.with(
new LinkedList<>(),
"Jeff"
),
"Walter"
);
}
private static List<String> with(
List<String> list, String item) {
list.add(item.toLowerCase());
return list;
}
}
An ideal design for method with() would create a new instance of List, populate it through addAll(list), then add(item) to it, and finally return. That would be perfectly immutable, but slow.
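That slow-but-immutable with() might be sketched like this; it is my rendering of the description above, and the copying of the whole list on every call is exactly why it is slow:

```java
import java.util.LinkedList;
import java.util.List;

class Foo {
    public List<String> names() {
        return Foo.with(
            Foo.with(new LinkedList<>(), "Jeff"),
            "Walter"
        );
    }
    // Immutable variant: the argument list is never touched; a fresh
    // copy is created, extended, and returned on every call.
    static List<String> with(
        final List<String> list, final String item) {
        final List<String> copy = new LinkedList<>();
        copy.addAll(list);
        copy.add(item.toLowerCase());
        return copy;
    }
}
```

With this version even the intermediate lists stay unchanged after with() returns, so no caller can be surprised by hidden mutation.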
So, what is wrong with this:
List<String> list = new LinkedList<>();
Foo.append(list, "Jeff");
Foo.append(list, "Walter");
return list;
It looks perfectly clean, doesn’t it? Instantiate a list, append two items to it, and return it. Yes, it is clean—for now. Because we remember what append() is doing. In a few months, we’ll get back to this code, and it will look like this:
List<String> list = new LinkedList<>();
// 10 more lines here
Foo.append(list, "Jeff");
Foo.append(list, "Walter");
// 10 more lines here
return list;
Is it so clear now that append() is actually adding "Jeff" to list? What will happen if I remove that line? Will it affect the result being returned in the last line? I don’t know. I need to check the body of method append() to make sure.
Also, how about returning list first and calling append() afterwards? This is what possible “refactoring” may do to our code:
List<String> list = new LinkedList<>();
if (/* something */) {
return list;
}
// 10 more lines here
Foo.append(list, "Walter");
Foo.append(list, "Jeff");
// 10 more lines here
return list;
First of all, we return list too early, when it is not ready. But did anyone tell me that these two calls to append() must happen before return list? Second, we changed the order of append() calls. Again, did anyone tell me that it’s important to call them in that particular order?
Nobody. Nowhere. This is called temporal coupling.
Our lines are coupled together. They must stay in this particular order, but the knowledge about that order is hidden. It’s easy to destroy the order, and our compiler won’t be able to catch us.
To the contrary, this design doesn’t have any “order”:
return Foo.with(
Foo.with(
new LinkedList<>(),
"Jeff"
),
"Walter"
);
It just returns a list, which is constructed by a few calls to the with() method. It is a single line instead of four.
As discussed before, an ideal method in OOP must have just a single statement, and this statement is return.
The same is true about validation. For example, this code is bad:
list.add("Jeff");
Foo.checkIfListStillHasSpace(list);
list.add("Walter");
While this one is much better:
list.add("Jeff");
Foo.withEnoughSpace(list).add("Walter");
See the difference?
And, of course, an ideal approach would be to use composable decorators instead of these ugly static methods. But if it’s not possible for some reason, just don’t make those static methods look like procedures. Make sure they always return results, which become arguments to further calls.
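A withEnoughSpace() of the kind shown above might be sketched like this. The capacity limit of ten is purely my illustrative assumption; the point is only that the method validates and then returns its argument so the call chains:

```java
import java.util.List;

class Foo {
    // Hypothetical capacity limit, invented for this sketch.
    private static final int MAX_SIZE = 10;
    // Validates and returns its argument, so the call chains:
    //   Foo.withEnoughSpace(list).add("Walter");
    static List<String> withEnoughSpace(final List<String> list) {
        if (list.size() >= MAX_SIZE) {
            throw new IllegalStateException(
                "The list is full; can't add more items."
            );
        }
        return list;
    }
}
```

Because the result feeds the next call, the dependency between the check and the add() is visible in the expression itself, not hidden in line order.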
" /> decorator pattern is one of the best ways to add features to an object without changing its interface. I use composable decorators quite often and always question myself as to how to design them right when the list of features must be configurable. I’m not sure I have the right answer, but here is some food for thought.
Let’s say I have a list of numbers:
interface Numbers {
Iterable<Integer> iterate();
}Now I want to create a list that will only have odd, unique, positive, and sorted numbers. The first approach is vertical (I just made this name up):
Numbers numbers = new Sorted(
new Unique(
new Odds(
new Positive(
new ArrayNumbers(
new Integer[] {
-1, 78, 4, -34, 98, 4,
}
)
)
)
)
);The second approach is horizontal (again, a name I made up):
Numbers numbers = new Modified(
new ArrayNumbers(
new Integer[] {
-1, 78, 4, -34, 98, 4,
}
),
new Diff[] {
new Positive(),
new Odds(),
new Unique(),
new Sorted(),
}
);See the difference? The first approach decorates ArrayNumbers “vertically,” adding functionality through the composable decorators Positive, Odds, Unique, and Sorted.
The second approach introduces the new interface Diff, which implements the core functionality of iterating numbers through instances of Positive, Odds, Unique, and Sorted:
interface Diff {
Iterable<Integer> apply(Iterable<Integer> origin);
}For the user of numbers, both approaches are the same. The difference is only in the design. Which one is better and when? It seems that vertical decorating is easier to implement and is more suitable for smaller objects that expose just a few methods.
As for my experience, I always tend to start with vertical decorating since it’s easier to implement but eventually migrate to a horizontal one when the number of decorators starts to grow.
"/>
https://www.yegor256.com/2015/10/01/vertical-horizontal-decorating.html
Vertical and Horizontal Decorating
- Moscow, Russia
- Yegor Bugayenko
A decorator pattern is one of the best ways to add features to an object without changing its interface. I use composable decorators quite often and always question myself as to how to design them right when the list of features must be configurable. I’m not sure I have the right answer, but here is some food for thought.

Let’s say I have a list of numbers:
interface Numbers {
  Iterable<Integer> iterate();
}
Now I want to create a list that will only have odd, unique, positive, and sorted numbers. The first approach is vertical (I just made this name up):
Numbers numbers = new Sorted(
  new Unique(
    new Odds(
      new Positive(
        new ArrayNumbers(
          new Integer[] {
            -1, 78, 4, -34, 98, 4,
          }
        )
      )
    )
  )
);
The second approach is horizontal (again, a name I made up):
Numbers numbers = new Modified(
  new ArrayNumbers(
    new Integer[] {
      -1, 78, 4, -34, 98, 4,
    }
  ),
  new Diff[] {
    new Positive(),
    new Odds(),
    new Unique(),
    new Sorted(),
  }
);
See the difference? The first approach decorates ArrayNumbers “vertically,” adding functionality through the composable decorators Positive, Odds, Unique, and Sorted.
The second approach introduces a new interface, Diff, and applies its implementations Positive, Odds, Unique, and Sorted to the numbers one by one:
interface Diff {
  Iterable<Integer> apply(Iterable<Integer> origin);
}
For the user of numbers, both approaches look the same. The difference is only in the design. Which one is better, and when? It seems that vertical decorating is easier to implement and is more suitable for smaller objects that expose just a few methods.
In my experience, I tend to start with vertical decorating, since it’s easier to implement, but eventually migrate to horizontal decorating when the number of decorators starts to grow.
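Neither Modified nor the internals of the decorators are shown in the article, so here is a minimal sketch of both styles under stated assumptions: only the positive-numbers filter is implemented, and the horizontal variant is named PositiveDiff here (in the article both variants share the name Positive, which is only possible across separate packages):

```java
import java.util.ArrayList;
import java.util.List;

public class Decorating {
    interface Numbers {
        Iterable<Integer> iterate();
    }

    interface Diff {
        Iterable<Integer> apply(Iterable<Integer> origin);
    }

    // Plain source of numbers.
    static class ArrayNumbers implements Numbers {
        private final Integer[] array;
        ArrayNumbers(Integer[] array) {
            this.array = array;
        }
        @Override
        public Iterable<Integer> iterate() {
            return List.of(array);
        }
    }

    // Vertical style: a decorator that wraps another Numbers.
    static class Positive implements Numbers {
        private final Numbers origin;
        Positive(Numbers origin) {
            this.origin = origin;
        }
        @Override
        public Iterable<Integer> iterate() {
            List<Integer> out = new ArrayList<>();
            for (int n : origin.iterate()) {
                if (n > 0) {
                    out.add(n);
                }
            }
            return out;
        }
    }

    // Horizontal style: the same filter, expressed as a Diff.
    static class PositiveDiff implements Diff {
        @Override
        public Iterable<Integer> apply(Iterable<Integer> origin) {
            List<Integer> out = new ArrayList<>();
            for (int n : origin) {
                if (n > 0) {
                    out.add(n);
                }
            }
            return out;
        }
    }

    // Applies an ordered array of diffs to the origin.
    static class Modified implements Numbers {
        private final Numbers origin;
        private final Diff[] diffs;
        Modified(Numbers origin, Diff[] diffs) {
            this.origin = origin;
            this.diffs = diffs;
        }
        @Override
        public Iterable<Integer> iterate() {
            Iterable<Integer> result = origin.iterate();
            for (Diff diff : diffs) {
                result = diff.apply(result);
            }
            return result;
        }
    }

    public static void main(String[] args) {
        Integer[] input = {-1, 78, 4, -34, 98, 4};
        // Both compositions yield the same filtered sequence.
        System.out.println(new Positive(new ArrayNumbers(input)).iterate());
        System.out.println(
            new Modified(
                new ArrayNumbers(input),
                new Diff[] {new PositiveDiff()}
            ).iterate()
        );
    }
}
```

Note the structural trade-off this makes visible: the vertical decorator must know about Numbers, while a Diff only knows about Iterable, which is what makes the horizontal list easy to reconfigure.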
https://www.yegor256.com/2015/09/01/redundant-variables-are-evil.html
Redundant Variables Are Pure Evil
- Kiev, Ukraine
- Yegor Bugayenko
A redundant variable is one that exists exclusively to explain its value. I strongly believe that such a variable is not only pure noise but also evil, with a very negative effect on code readability. When we introduce a redundant variable, we intend to make our code cleaner and easier to read. In reality, though, we make it more verbose and difficult to understand. Without exception, any variable used only once is redundant and must be replaced with a value.

Here, variable fileName is redundant:
String fileName = "test.txt";
print("Length is " + new File(fileName).length());
This code must look different:
print("Length is " + new File("test.txt").length());
This example is very primitive, but I’m sure you’ve seen such redundant variables many times. We use them to “explain” the code—it’s not just a string literal "test.txt" anymore but a fileName. The code looks easier to understand, right? Not really.
Let’s dig into what “readability” of code is in the first place. I think this quality can be measured by the number of seconds I need to understand the code I’m looking at. The longer the timeframe, the lower the readability. Ideally, I want to understand any piece of code in a few seconds. If I can’t, that’s a failure of its author.
Remember, if I don’t understand you, it’s your fault.
An increasing length of code degrades readability. So the more variable names I have to remember while reading through it, the longer it takes to digest the code and come to a conclusion about its purpose and effects. I think four is the maximum number of variables I can comfortably keep in my head without thinking about quitting the job.
New variables make the code longer because they need extra lines to be declared. And they make the code more complex because its reader has to remember more names.
Thus, when you want to introduce a new variable to explain what your code is doing, stop and think. Your code is too complex and long in the first place! Refactor it using new objects or methods but not variables. Make your code shorter by moving pieces of it into new classes or private methods.
Moreover, I think that in perfectly designed methods, you won’t need any variables aside from method arguments.
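As a sketch of that advice, the file-length example could trade its variable for a small private method; the method name testFile() is my own invention, not from the article:

```java
import java.io.File;

public class NoTemps {
    // Before: a temporary variable exists only to "name" a value.
    static long lengthBefore() {
        String fileName = "test.txt";
        return new File(fileName).length();
    }

    // After: the value is inlined; if a name is really needed,
    // a method (not a variable) carries it.
    static File testFile() {
        return new File("test.txt");
    }

    static long lengthAfter() {
        return testFile().length();
    }

    public static void main(String[] args) {
        // File#length() returns 0 for a missing file, so the two
        // variants agree whether or not test.txt exists.
        System.out.println(lengthBefore() == lengthAfter()); // true
    }
}
```

The method version keeps the explanatory name but removes it from the reader's working memory: nothing has to be tracked between lines.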
https://www.yegor256.com/2015/08/18/multiple-return-statements-in-oop.html
Why Many Return Statements Are a Bad Idea in OOP
- Kiev, Ukraine
- Yegor Bugayenko
This debate is very old, but I have something to say too. The question is whether a method may have multiple return statements or always just one. The answer may surprise you: In a pure object-oriented world, a method must have a single return statement and nothing else. Yes, just a return statement and that’s it. No other operators or statements. Just return. All arguments in favor of multiple return statements go against the very idea of object-oriented programming.
This is a classical example:
public int max(int a, int b) {
  if (a > b) {
    return a;
  }
  return b;
}
The code above has two return statements, and it is shorter than this one with a single return:
public int max(int a, int b) {
  int m;
  if (a > b) {
    m = a;
  } else {
    m = b;
  }
  return m;
}
More verbose, less readable, and slower, right? Right.
This is the code in a pure object-oriented world:
public int max(int a, int b) {
  return new If(
    new GreaterThan(a, b),
    a, b
  );
}
What do you think now? There are no statements or operators. No if and no >. Instead, there are objects of classes If and GreaterThan.
This is a pure and clean object-oriented approach.
However, Java doesn’t have that. Java (and many other pseudo-OOP languages) gives us operators like if, else, switch, for, while, etc., instead of giving us built-in classes that would do the same. Because of that, we continue to think in terms of procedures and keep debating whether two return statements are better than one.
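Since Java has no built-in If or GreaterThan, here is a hedged sketch of how such classes could be written by hand. Unlike the fragment above, it adds a value() call at the end so that the code actually compiles and returns an int:

```java
public class PureMax {
    // Hypothetical class: encapsulates a comparison as an object.
    static class GreaterThan {
        private final int left;
        private final int right;
        GreaterThan(int left, int right) {
            this.left = left;
            this.right = right;
        }
        boolean value() {
            return left > right;
        }
    }

    // Hypothetical class: encapsulates branching as an object.
    static class If {
        private final GreaterThan condition;
        private final int positive;
        private final int negative;
        If(GreaterThan condition, int positive, int negative) {
            this.condition = condition;
            this.positive = positive;
            this.negative = negative;
        }
        int value() {
            return condition.value() ? positive : negative;
        }
    }

    // The method body is a single return, as the text advocates.
    static int max(int a, int b) {
        return new If(new GreaterThan(a, b), a, b).value();
    }

    public static void main(String[] args) {
        System.out.println(max(2, 5)); // 5
    }
}
```

The ternary inside If.value() is the one remaining branch, pushed out of max() and into a reusable object.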
If your code is truly object-oriented, you won’t be able to have more than one return. Moreover, you will have nothing except a return in each method. Actually, you will have only two operators in the entire software—new and return. That’s it.
Until we’re there, let’s stick with just one return and at least try to look like pure OOP.
Let me first explain how I understand exceptions in object-oriented programming. Then I’ll compare my understanding with a “traditional” approach, and we’ll discuss the differences. So, my understanding first.
Say there is a method that saves some binary data to a file:
public void save(File file, byte[] data)
  throws Exception {
  // save data to the file
}
When everything goes right, the method just saves the data and returns control. When something goes wrong, it throws Exception, and we have to do something about it:
try {
  save(file, data);
} catch (Exception ex) {
  System.out.println("Sorry, we can't save right now.");
}
When a method says it throws an exception, I understand that the method is not safe. It may fail sometimes, and it’s my responsibility to either 1) handle this failure or 2) declare myself as unsafe too.
I know each method is designed with the single responsibility principle in mind. This guarantees to me that if method save() fails, the entire saving operation can’t be completed. If I need to know the cause of the failure, I will un-chain the exception—traverse the stack of chained exceptions and stack traces encapsulated in ex.
I never use exceptions for flow control, which means I never recover situations where exceptions are thrown. When an exception occurs, I let it float up to the highest level of the application. Sometimes I rethrow it in order to add more semantic information to the chain. That’s why it doesn’t matter to me what the cause of the exception thrown by save() was. I just know the method failed. That’s enough for me. Always.
For the same reason, I don’t need to differentiate between different exception types. I just don’t need that type of hierarchy. Exception is enough for me. Again, that’s because I don’t use exceptions for flow control.
That’s how I understand exceptions.
According to this paradigm, I would say we must:
- Always use checked exceptions.
- Never throw/use unchecked exceptions.
- Use only Exception, without any sub-types.
- Always declare one exception type in the throws block.
- Never catch without rethrowing.
This paradigm diverges from many other articles I’ve found on this subject. Let’s compare and discuss.
Runtime vs. API Exceptions
Oracle says some exceptions should be part of the API (checked ones), while others are runtime exceptions and should not be part of it (unchecked); they are documented in the JavaDoc but not in the method signature.
I don’t understand the logic here, and I’m sure Java’s designers don’t understand it either. How and why are some exceptions important while others are not? Why do some of them deserve a proper place in the throws block of the method signature while others don’t? What are the criteria?
I have an answer here, though. By introducing checked and unchecked exceptions, Java developers tried to solve the problem of methods that are too complex and messy. When a method is too big and does too many things at the same time (violates the single responsibility principle), it’s definitely better to let us keep some exceptions “hidden” (a.k.a. unchecked). But it’s not a real solution. It is only a temporary patch that does all of us more harm than good—methods keep growing in size and complexity.
Unchecked exceptions are a mistake in Java design, not checked ones.
Hiding the fact that a method may fail at some point is a mistake. That’s exactly what unchecked exceptions do.
Instead, we should make this fact visible. When a method does too many things, there will be too many points of failure, and the author of the method will realize that something is wrong—a method should not throw exceptions in so many situations. This will lead to refactoring. The existence of unchecked exceptions leads to a mess. By the way, checked exceptions don’t exist at all in Ruby, C#, Python, PHP, etc. This means that creators of these languages understand OOP even less than Java authors.
Checked Exceptions Are Too Noisy
Another common argument against checked exceptions is that they make our code more verbose. We have to put try/catch everywhere instead of staying focused on the main logic. Bozhidar Bozhanov even suggests a technical solution for this verbosity problem.
Again, I don’t understand this logic. If I want to do something when method save() fails, I catch the exception and handle the situation somehow. If I don’t, I simply declare that my method throws too and pay no attention to exception handling. What is the problem? Where does the verbosity come from?
I have an answer here, too. It’s coming from the existence of unchecked exceptions. We simply can’t always ignore failure, because the interfaces we’re using don’t allow us to do this. That’s all. For example, class Runnable, which is widely used for multi-thread programming, has method run() that is not supposed to throw anything. That’s why we always have to catch everything inside the method and rethrow checked exceptions as unchecked.
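For example, with Runnable (a sketch; the choice of IllegalStateException as the unchecked wrapper is mine, not prescribed by the text):

```java
public class Forced {
    static void save() throws Exception {
        throw new Exception("boom"); // simulated checked failure
    }

    static String outcome() {
        // Runnable.run() declares no checked exceptions, so inside
        // it we are forced to catch and rethrow as an unchecked one.
        Runnable task = () -> {
            try {
                save();
            } catch (Exception ex) {
                throw new IllegalStateException(ex);
            }
        };
        try {
            task.run();
            return "ok";
        } catch (IllegalStateException ex) {
            return ex.getCause().getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println(outcome()); // boom
    }
}
```

The try/catch inside the lambda is exactly the "noise" the text describes: it exists only because the interface forbids an honest throws declaration.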
If all methods in all Java interfaces were declared either as “safe” (throws nothing) or “unsafe” (throws Exception), everything would become logical and clear. If you want to stay “safe,” take responsibility for failure handling. Otherwise, be “unsafe” and let your users worry about safety.
No noise, very clean code, and obvious logic.
Inappropriately Exposed Implementation Details
Some say that the ability to put a checked exception into the throws block of a method signature, instead of catching it and rethrowing a new type, encourages us to expose too many irrelevant exception types in method signatures. For example, our method save() may declare that it may throw an OutOfMemoryError, even though it seems to have nothing to do with memory allocation. But it does allocate some memory, right? So a memory overflow may happen during a file-saving operation.
Yet again, I don’t get the logic of this argument. If all exceptions are checked, and we don’t have multiple exception types, we just throw Exception everywhere, and that’s it. Why do we need to care about the exception type in the first place? If we don’t use exceptions to control flow, we won’t do this.
If we really want to make our application resistant to memory overflows, we will introduce some memory manager with a method like bigEnough(), which tells us whether the heap is big enough for the next operation. Using exceptions in such situations is a totally inappropriate approach to exception management in OOP.
Recoverable Exceptions
Joshua Bloch, in Effective Java, says to “use checked exceptions for recoverable conditions and runtime exceptions for programming errors.” He means something like this:
try {
  save(file, data);
} catch (Exception ex) {
  // We can't save the file, but it's OK;
  // let's move on and do something else.
}
How is that any different from the famous anti-pattern Don’t Use Exceptions for Flow Control? Joshua, with all due respect, you’re wrong. There are no such things as recoverable conditions in OOP. An exception indicates that the execution of a chain of calls from method to method is broken, and it’s time to go up through the chain and stop somewhere. But we never go back again after the exception:
App#run()
  Data#update()
    Data#write()
      File#save() <-- Boom, there's a failure here, so we go up
We can start this chain again, but we don’t go back after throw. In other words, we don’t do anything in the catch block. We only report the problem and wrap up execution. We never “recover”!
All arguments against checked exceptions demonstrate nothing but a serious misunderstanding of object-oriented programming by their authors. The mistake in Java and in many other languages is the existence of unchecked exceptions, not checked ones.
" /> debate is over, isn’t it? Not for me. While most object-oriented languages don’t have them, and most programmers think checked exceptions are a Java mistake, I believe in the opposite—unchecked exceptions are the mistake. Moreover, I believe multiple exception types are a bad idea too.
Let me first explain how I understand exceptions in object-oriented programming. Then I’ll compare my understanding with a “traditional” approach, and we’ll discuss the differences. So, my understanding first.
Say there is a method that saves some binary data to a file:
public void save(File file, byte[] data)
throws Exception {
// save data to the file
}When everything goes right, the method just saves the data and returns control. When something is wrong, it throws Exception and we have to do something about it:
try {
save(file, data);
} catch (Exception ex) {
System.out.println("Sorry, we can't save right now.");
}When a method says it throws an exception, I understand that the method is not safe. It may fail sometimes, and it’s my responsibility to either 1) handle this failure or 2) declare myself as unsafe too.
I know each method is designed with a single responsibility principle in mind. This is a guarantee to me that if method save() fails, it means the entire saving operation can’t be completed. If I need to know what the cause of this failure was, I will un-chain the exception—traverse the stack of chained exceptions and stack traces encapsulated in ex.
I never use exceptions for flow control, which means I never recover situations where exceptions are thrown. When an exception occurs, I let it float up to the highest level of the application. Sometimes I rethrow it in order to add more semantic information to the chain. That’s why it doesn’t matter to me what the cause of the exception thrown by save() was. I just know the method failed. That’s enough for me. Always.
For the same reason, I don’t need to differentiate between different exception types. I just don’t need that type of hierarchy. Exception is enough for me. Again, that’s because I don’t use exceptions for flow control.
That’s how I understand exceptions.
According to this paradigm, I would say we must:
- Always use checked exceptions.
- Never throw/use unchecked exceptions.
- Use only
Exception, without any sub-types. - Always declare one exception type in the
throwsblock. - Never catch without rethrowing; read more about that here.
This paradigm diverges from many other articles I’ve found on this subject. Let’s compare and discuss.
Runtime vs. API Exceptions
Oracle says some exceptions should be part of API (checked ones) while some are runtime exceptions and should not be part of it (unchecked). They will be documented in JavaDoc but not in the method signature.
I don’t understand the logic here, and I’m sure Java designers don’t understand it either. How and why are some exceptions important while others are not? Why do some of them deserve a proper API position in the throws block of the method signature while others don’t? What is the criteria?
I have an answer here, though. By introducing checked and unchecked exceptions, Java developers tried to solve the problem of methods that are too complex and messy. When a method is too big and does too many things at the same time (violates the single responsibility principle), it’s definitely better to let us keep some exceptions “hidden” (a.k.a. unchecked). But it’s not a real solution. It is only a temporary patch that does all of us more harm than good—methods keep growing in size and complexity.
Unchecked exceptions are a mistake in Java design, not checked ones.
Hiding the fact that a method may fail at some point is a mistake. That’s exactly what unchecked exceptions do.
Instead, we should make this fact visible. When a method does too many things, there will be too many points of failure, and the author of the method will realize that something is wrong—a method should not throw exceptions in so many situations. This will lead to refactoring. The existence of unchecked exceptions leads to a mess. By the way, checked exceptions don’t exist at all in Ruby, C#, Python, PHP, etc. This means that creators of these languages understand OOP even less than Java authors.
Checked Exceptions Are Too Noisy
Another common argument against checked exceptions is that they make our code more verbose. We have to put try/catch everywhere instead of staying focused on the main logic. Bozhidar Bozhanov even suggests a technical solution for this verbosity problem.
Again, I don’t understand this logic. If I want to do something when method save() fails, I catch the exception and handle the situation somehow. If I don’t want to do that, I just say my method also throws and pay no attention to exception handling. What is the problem? Where is the verbosity coming from?
I have an answer here, too. It’s coming from the existence of unchecked exceptions. We simply can’t always ignore failure, because the interfaces we’re using don’t allow us to do this. That’s all. For example, class Runnable, which is widely used for multi-thread programming, has method run() that is not supposed to throw anything. That’s why we always have to catch everything inside the method and rethrow checked exceptions as unchecked.
If all methods in all Java interfaces would be declared either as “safe” (throws nothing) or “unsafe” (throws Exception), everything would become logical and clear. If you want to stay “safe,” take responsibility for failure handling. Otherwise, be “unsafe” and let your users worry about safety.
No noise, very clean code, and obvious logic.
Inappropriately Exposed Implementation Details
Some say the ability to put a checked exception into throws in the method signature instead of catching it here and rethrowing a new type encourages us to have too many irrelevant exception types in method signatures. For example, our method save() may declare that it may throw OutOfMemoryException, even though it seems to have nothing to do with memory allocation. But it does allocate some memory, right? So such a memory overflow may happen during a file saving operation.
Yet again, I don’t get the logic of this argument. If all exceptions are checked, and we don’t have multiple exception types, we just throw Exception everywhere, and that’s it. Why do we need to care about the exception type in the first place? If we don’t use exceptions to control flow, we won’t do this.
If we really want to make our application memory overflow-resistant, we will introduce some memory manager, which will have something like the bigEnough() method, which will tell us whether our heap is big enough for the next operation. Using exceptions in such situations is a totally inappropriate approach to exception management in OOP.
Recoverable Exceptions
Joshua Bloch, in Effective Java, says to “use checked exceptions for recoverable conditions and runtime exceptions for programming errors.” He means something like this:
try {
save(file, data);
} catch (Exception ex) {
// We can't save the file, but it's OK
// Let's move on and do something else
}How is that any different from a famous anti-pattern called Don’t Use Exceptions for Flow Control? Joshua, with all due respect, you’re wrong. There are no such things as recoverable conditions in OOP. An exception indicates that the execution of a chain of calls from method to method is broken, and it’s time to go up through the chain and stop somewhere. But we never go back again after the exception:
App#run()
Data#update()
Data#write()
File#save() <-- Boom, there's a failure here, so we go upWe can start this chain again, but we don’t go back after throw. In other words, we don’t do anything in the catch block. We only report the problem and wrap up execution. We never “recover!”
All arguments against checked exceptions demonstrate nothing but a serious misunderstanding of object-oriented programming by their authors. The mistake in Java and in many other languages is the existence of unchecked exceptions, not checked ones.
"/>
https://www.yegor256.com/2015/07/28/checked-vs-unchecked-exceptions.html
Checked vs. Unchecked Exceptions: The Debate Is Not Over
- Sunnyvale, CA
- Yegor Bugayenko
Do we need checked exceptions at all? The debate is over, isn’t it? Not for me. While most object-oriented languages don’t have them, and most programmers think checked exceptions are a Java mistake, I believe in the opposite—unchecked exceptions are the mistake. Moreover, I believe multiple exception types are a bad idea too.

Let me first explain how I understand exceptions in object-oriented programming. Then I’ll compare my understanding with a “traditional” approach, and we’ll discuss the differences. So, my understanding first.
Say there is a method that saves some binary data to a file:
public void save(File file, byte[] data)
throws Exception {
// save data to the file
}

When everything goes right, the method just saves the data and returns control. When something is wrong, it throws Exception and we have to do something about it:
try {
save(file, data);
} catch (Exception ex) {
System.out.println("Sorry, we can't save right now.");
}

When a method says it throws an exception, I understand that the method is not safe. It may fail sometimes, and it’s my responsibility to either 1) handle this failure or 2) declare myself as unsafe too.
I know each method is designed with a single responsibility principle in mind. This is a guarantee to me that if method save() fails, it means the entire saving operation can’t be completed. If I need to know what the cause of this failure was, I will un-chain the exception—traverse the stack of chained exceptions and stack traces encapsulated in ex.
I never use exceptions for flow control, which means I never recover situations where exceptions are thrown. When an exception occurs, I let it float up to the highest level of the application. Sometimes I rethrow it in order to add more semantic information to the chain. That’s why it doesn’t matter to me what the cause of the exception thrown by save() was. I just know the method failed. That’s enough for me. Always.
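Rethrowing to add semantic information can be sketched like this (the class, method names, and messages below are illustrative, not from the original post):

```java
import java.io.IOException;

public class Update {
    // A stand-in for save(), failing deep in the call chain.
    static void save(byte[] data) throws Exception {
        throw new IOException("disk is full");
    }

    // A hypothetical caller that adds context and lets the
    // exception keep floating up instead of handling it.
    static void update(byte[] data) throws Exception {
        try {
            save(data);
        } catch (Exception ex) {
            // Wrap the original exception; its message and stack
            // trace stay reachable through getCause().
            throw new Exception("failed to update the profile", ex);
        }
    }

    public static void main(String[] args) {
        try {
            update(new byte[] {1, 2, 3});
        } catch (Exception ex) {
            System.out.println(ex.getMessage());
            System.out.println(ex.getCause().getMessage());
        }
    }
}
```

The caller never inspects the cause to decide what to do; it only enriches the chain on the way up.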
For the same reason, I don’t need to differentiate between different exception types. I just don’t need that type of hierarchy. Exception is enough for me. Again, that’s because I don’t use exceptions for flow control.
That’s how I understand exceptions.
According to this paradigm, I would say we must:
- Always use checked exceptions.
- Never throw/use unchecked exceptions.
- Use only Exception, without any sub-types.
- Always declare one exception type in the throws block.
- Never catch without rethrowing.
This paradigm diverges from many other articles I’ve found on this subject. Let’s compare and discuss.
Runtime vs. API Exceptions
Oracle says some exceptions should be part of the API (checked ones), while others are runtime exceptions and should not be part of it (unchecked). The latter will be documented in JavaDoc but not in the method signature.
I don’t understand the logic here, and I’m sure Java’s designers don’t understand it either. How and why are some exceptions important while others are not? Why do some of them deserve a place in the throws block of the method signature while others don’t? What are the criteria?
I have an answer here, though. By introducing checked and unchecked exceptions, Java’s designers tried to solve the problem of methods that are too complex and messy. When a method is too big and does too many things at once (violating the single responsibility principle), it’s certainly convenient to keep some of its exceptions “hidden” (a.k.a. unchecked). But that’s not a real solution. It’s only a temporary patch that does all of us more harm than good—methods keep growing in size and complexity.
Unchecked exceptions are a mistake in Java design, not checked ones.
Hiding the fact that a method may fail at some point is a mistake. That’s exactly what unchecked exceptions do.
Instead, we should make this fact visible. When a method does too many things, there will be too many points of failure, and the author of the method will realize that something is wrong—a method should not throw exceptions in so many situations. That realization leads to refactoring; the existence of unchecked exceptions leads to a mess. By the way, checked exceptions don’t exist at all in Ruby, C#, Python, PHP, etc., which means the creators of those languages understand OOP even less than the authors of Java did.
Checked Exceptions Are Too Noisy
Another common argument against checked exceptions is that they make our code more verbose. We have to put try/catch everywhere instead of staying focused on the main logic. Bozhidar Bozhanov even suggests a technical solution for this verbosity problem.
Again, I don’t understand this logic. If I want to do something when method save() fails, I catch the exception and handle the situation somehow. If I don’t want to do that, I just say my method also throws and pay no attention to exception handling. What is the problem? Where is the verbosity coming from?
I have an answer here, too. It’s coming from the existence of unchecked exceptions. We simply can’t always ignore failure, because the interfaces we’re using don’t allow us to do this. That’s all. For example, class Runnable, which is widely used for multithreaded programming, has a method run() that is not supposed to throw anything. That’s why we always have to catch everything inside the method and rethrow checked exceptions as unchecked.
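The forced catch-and-rethrow around Runnable looks roughly like this (a minimal sketch; the save() stub and the choice of IllegalStateException as the wrapper are illustrative):

```java
public class Wrap {
    // A stand-in for some checked-exception-throwing operation.
    static void save() throws Exception {
        throw new Exception("boom");
    }

    public static void main(String[] args) {
        Runnable task = () -> {
            try {
                save();
            } catch (Exception ex) {
                // run() declares no checked exceptions, so the only
                // way out is to rethrow as unchecked.
                throw new IllegalStateException(ex);
            }
        };
        try {
            task.run();
        } catch (IllegalStateException ex) {
            System.out.println(ex.getCause().getMessage());
        }
    }
}
```

This is exactly the ceremony the text blames on the checked/unchecked split: the failure is real either way, but the interface forces us to launder it.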
If all methods in all Java interfaces were declared either as “safe” (throws nothing) or “unsafe” (throws Exception), everything would become logical and clear. If you want to stay “safe,” take responsibility for failure handling. Otherwise, be “unsafe” and let your users worry about safety.
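The “safe”/“unsafe” dichotomy could be sketched as two interface flavors (the interface names and bodies below are hypothetical, invented for illustration):

```java
public class Storages {
    // "Safe": the implementation takes responsibility for failure.
    interface SafeStorage {
        void save(byte[] data);
    }

    // "Unsafe": declares the single checked Exception and lets
    // callers worry about safety.
    interface UnsafeStorage {
        void save(byte[] data) throws Exception;
    }

    public static void main(String[] args) {
        SafeStorage safe = data ->
            System.out.println("saved " + data.length + " bytes");
        UnsafeStorage risky = data -> {
            throw new Exception("no space left");
        };
        safe.save(new byte[] {1});
        try {
            risky.save(new byte[] {1});
        } catch (Exception ex) {
            System.out.println("caught: " + ex.getMessage());
        }
    }
}
```

The caller of an UnsafeStorage must either catch or declare throws Exception itself; the caller of a SafeStorage writes no handling code at all.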
No noise, very clean code, and obvious logic.
Inappropriately Exposed Implementation Details
Some say the ability to put a checked exception into throws in the method signature, instead of catching it there and rethrowing a new type, encourages us to have too many irrelevant exception types in method signatures. For example, our method save() may declare that it may throw OutOfMemoryError, even though it seems to have nothing to do with memory allocation. But it does allocate some memory, right? So such a memory overflow may happen during a file saving operation.
Yet again, I don’t get the logic of this argument. If all exceptions are checked, and we don’t have multiple exception types, we just throw Exception everywhere, and that’s it. Why do we need to care about the exception type in the first place? If we don’t use exceptions to control flow, we won’t do this.
If we really want to make our application resistant to memory overflows, we will introduce some memory manager with something like a bigEnough() method, which tells us whether the heap is big enough for the next operation. Using exceptions in such situations is a totally inappropriate approach to exception management in OOP.
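Such a memory manager is only named, never shown, in the text; one minimal sketch of the bigEnough() idea, using the JVM’s Runtime accounting, might look like this:

```java
public class Memory {
    // A hypothetical check: is there at least `required` bytes of
    // headroom on the heap before we attempt the next operation?
    static boolean bigEnough(long required) {
        Runtime rt = Runtime.getRuntime();
        long used = rt.totalMemory() - rt.freeMemory();
        long headroom = rt.maxMemory() - used;
        return headroom > required;
    }

    public static void main(String[] args) {
        byte[] data = new byte[1024];
        // Ask up front instead of waiting for a failure mid-save.
        if (bigEnough(data.length * 2L)) {
            System.out.println("safe to proceed");
        } else {
            System.out.println("not enough heap");
        }
    }
}
```

The point is the inversion: capacity becomes an ordinary boolean question asked before the work, not an exception caught after it.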
Recoverable Exceptions
Joshua Bloch, in Effective Java, says to “use checked exceptions for recoverable conditions and runtime exceptions for programming errors.” He means something like this:
try {
save(file, data);
} catch (Exception ex) {
// We can't save the file, but it's OK
// Let's move on and do something else
}

How is that any different from the famous anti-pattern of using exceptions for flow control? Joshua, with all due respect, you’re wrong. There is no such thing as a recoverable condition in OOP. An exception indicates that the execution of a chain of calls from method to method is broken, and it’s time to go up through the chain and stop somewhere. But we never go back again after the exception:
App#run()
Data#update()
Data#write()
File#save() <-- Boom, there's a failure here, so we go up

We can start this chain again, but we don’t go back after a throw. In other words, we don’t do anything in the catch block. We only report the problem and wrap up execution. We never “recover”!
All arguments against checked exceptions demonstrate nothing but a serious misunderstanding of object-oriented programming by their authors. The mistake in Java and in many other languages is the existence of unchecked exceptions, not checked ones.
https://www.yegor256.com/2015/07/09/catch-if-cant-otherwise.html
Catch Me If You ... Can't Do Otherwise
- Dallas, TX
- Yegor Bugayenko
I don’t know whether it’s an anti-pattern or just a common and very popular mistake, but I see it everywhere and simply must write about it. I’m talking about exception catching without re-throwing. I’m talking about something like this Java code:
try {
stream.write(data);
} catch (IOException ex) {
ex.printStackTrace();
}
Pay attention: I don’t have anything against this code:
try {
stream.write('X');
} catch (IOException ex) {
throw new IllegalStateException(ex);
}

This is called exception chaining and is a perfectly valid construct.
So what is wrong with catching an exception and logging it? Let’s try to look at the bigger picture first. We’re talking about object-oriented programming—this means we’re dealing with objects. Here is how an object (its class, to be exact) would look:
final class Wire {
private final OutputStream stream;
Wire(final OutputStream stm) {
this.stream = stm;
}
public void send(final int data) {
try {
this.stream.write(data);
} catch (IOException ex) {
ex.printStackTrace();
}
}
}

Here is how I’m using this class:
new Wire(stream).send(1);

Looks nice, right? I don’t need to worry about that IOException when I’m calling send(1). It will be handled internally, and if it occurs, the stack trace will be logged. But this is a totally wrong way of thinking, and it’s inherited from languages without exceptions, like C.
Exceptions were invented to simplify our design by moving the entire error handling code away from the main logic. Moreover, we’re not just moving it away but also concentrating it in one place—in the main() method, the entry point of the entire app.
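Concentrating all handling in main() can be sketched like this (the App class and its message are illustrative):

```java
public class App {
    // Business logic: nothing is caught down here; failures
    // simply float up as a single checked Exception.
    static void run() throws Exception {
        throw new Exception("failed to fetch the data");
    }

    public static void main(String[] args) {
        try {
            run();
        } catch (Exception ex) {
            // The one place in the app where failures surface
            // to the user.
            System.out.println("Error: " + ex.getMessage());
        }
    }
}
```

Every method between main() and the failure point either declares throws Exception or wraps and rethrows; none of them swallow.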
The primary purpose of an exception is to collect as much information as possible about the error and float it up to the highest level, where the user is capable of doing something about it. Exception chaining helps even further by allowing us to extend that information on its way up. We are basically putting our bubble (the exception) into a bigger bubble every time we catch it and re-throw. When it hits the surface, there are many bubbles, each remaining inside another like a Russian doll. The original exception is the smallest bubble.
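At the surface, the nested “bubbles” can be unwrapped through getCause(); a small sketch (the three messages are invented for illustration):

```java
public class Bubbles {
    public static void main(String[] args) {
        // Three nested bubbles: each catch along the way wrapped
        // the previous exception and rethrew.
        Exception inner = new Exception("disk is full");
        Exception middle = new Exception("can't save the file", inner);
        Exception outer = new Exception("can't update the profile", middle);
        // Walk the Russian doll from the outermost bubble to the
        // original failure.
        for (Throwable t = outer; t != null; t = t.getCause()) {
            System.out.println(t.getMessage());
        }
    }
}
```

Each layer adds one sentence of context, and nothing from the original failure is lost.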
When you catch an exception without re-throwing it, you basically pop the bubble. Everything inside it, including the original exception and all other bubbles with the information inside them, are in your hands. You don’t let me see them. You use them somehow, but I don’t know how. You’re doing something behind the scenes, hiding potentially important information.
If you’re hiding that from me, I can’t promise my user that I will be honest with him and openly report a problem when it occurs. I simply can’t trust your send() method anymore, and my user will not trust me.
By catching exceptions without re-throwing them, you’re basically breaking the chain of trust between objects.
My suggestion is to catch exceptions as seldom as possible, and every time you catch them, re-throw.
Unfortunately, the design of Java goes against this principle in many places. For example, Java has both checked and unchecked exceptions, while in my opinion there should be only checked ones (the ones you must either catch or declare as throwable). Java also allows multiple exception types to be declared as throwable in a single method—yet another mistake; stick to declaring just one type. Moreover, there is a generic Exception class at the top of the hierarchy, which is also wrong in my opinion. Besides that, some built-in interfaces don’t allow any checked exceptions to be thrown, like Runnable.run(). There are many other problems with exceptions in Java.
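For instance, since Runnable.run() declares no checked exceptions, the honest way out of one is to chain it into an unchecked wrapper rather than swallow it. The JDK provides UncheckedIOException for exactly this purpose; the Task class below is a made-up illustration:

```java
import java.io.IOException;
import java.io.UncheckedIOException;

public final class Task implements Runnable {
    @Override
    public void run() {
        try {
            work();
        } catch (IOException ex) {
            // Runnable.run() can't declare IOException, so instead of
            // swallowing it we chain it into an unchecked wrapper.
            throw new UncheckedIOException(ex);
        }
    }

    private void work() throws IOException {
        throw new IOException("network is down");
    }
}
```

The caller can still recover the original exception through getCause(); the bubble is wrapped, not popped.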
But try to keep this principle in mind and your code will be cleaner: catch only if you have no other choice.
P.S. Here is how the class should look:
final class Wire {
private final OutputStream stream;
Wire(final OutputStream stm) {
this.stream = stm;
}
public void send(final int data)
throws IOException {
this.stream.write(data);
}
}
https://www.yegor256.com/2015/07/06/public-static-literals.html
Public Static Literals ... Are Not a Solution for Data Duplication
- Palo Alto, CA
- Yegor Bugayenko
I have a new String(array,"UTF-8") in one place and exactly the same code in another place in my app. Actually, I may have it in many places. And every time, I have to use that "UTF-8" constant in order to create a String from a byte array. It would be very convenient to define it once somewhere and reuse it, just like Apache Commons is doing; see CharEncoding.UTF_8 (There are many other static literals there). These guys are setting a bad example! public static “properties” are as bad as utility classes.

Here is what I’m talking about, specifically:
package org.apache.commons.lang3;
public class CharEncoding {
public static final String UTF_8 = "UTF-8";
// some other methods and properties
}
Now, when I need to create a String from a byte array, I use this:
import org.apache.commons.lang3.CharEncoding;
String text = new String(array, CharEncoding.UTF_8);
Let’s say I want to convert a String into a byte array:
import org.apache.commons.lang3.CharEncoding;
byte[] array = text.getBytes(CharEncoding.UTF_8);
Looks convenient, right? This is what the designers of Apache Commons think (one of the most popular but simply terrible libraries in the Java world). I encourage you to think differently. I can’t tell you to stop using Apache Commons, because we just don’t have a better alternative (yet!). But in your own code, don’t use public static properties—ever. Even if this code may look convenient to you, it’s a very bad design.
The reason why is very similar to utility classes with public static methods—they are unbreakable hard-coded dependencies. Once you use that CharEncoding.UTF_8, your object starts to depend on this data, and its user (the user of your object) can’t break this dependency. You may say that this is your intention, in the case of a "UTF-8" constant—to make sure that Unicode is specifically and exclusively being used. In this particular example, this may be true, but look at it from a more global perspective.
Let me show you the alternative I have in mind before we continue. Here is what I’m suggesting instead to convert a byte array into a String:
String text = new UTF8String(array);
It’s pseudo-code, since Java designers made class String final and we can’t really extend it and create UTF8String, but you get the idea. In the real world, this would look like this:
String text = new UTF8String(array).toString();
As you see, we encapsulate the “UTF-8” constant somewhere inside the class UTF8String, and its users have no idea how exactly this “byte array to string” conversion is happening.
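A minimal sketch of what such a class could look like (UTF8String is an invented name, not an existing library class; internally it leans on the JDK's StandardCharsets, so the encoding knowledge lives in exactly one place):

```java
import java.nio.charset.StandardCharsets;

public final class UTF8String {
    private final byte[] array;

    public UTF8String(byte[] bytes) {
        // Defensive copy: the object encapsulates its own data.
        this.array = bytes.clone();
    }

    @Override
    public String toString() {
        // The "UTF-8" decision is made here and nowhere else;
        // callers never mention the charset at all.
        return new String(this.array, StandardCharsets.UTF_8);
    }
}
```

Users of this class can't even tell which encoding is being used, let alone duplicate the conversion logic elsewhere.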
By introducing UTF8String, we solved the problem of “UTF-8” literal duplication. But we did it in a proper object-oriented way—we encapsulated the functionality inside a class and let everybody instantiate its objects and use them. We resolved the problem of functionality duplication, not just data duplication.
Placing data into one shared place (CharEncoding.UTF_8) doesn’t really solve the duplication problem; it actually makes it worse, mostly because it encourages everybody to duplicate functionality using the same piece of shared data.
My point here is that every time you see that you have some data duplication in your application, start thinking about the functionality you’re duplicating. You will easily find the code that is repeated again and again. Make a new class for this code and place the data there, as a private property (or private static property). That’s how you will improve your design and truly get rid of duplication.
PS. You can use a method instead of a class, but not a static literal.
https://www.yegor256.com/2015/05/28/one-primary-constructor.html
There Can Be Only One Primary Constructor
- Mountain View, CA
- Yegor Bugayenko
I suggest classifying class constructors in OOP as primary and secondary. A primary constructor is the one that constructs an object and encapsulates other objects inside it. A secondary one is simply a preparation step before calling a primary constructor and is not really a constructor but rather an introductory layer in front of a real constructing mechanism.

Here is what I mean:
final class Cash {
private final int cents;
private final String currency;
public Cash() { // secondary
this(0);
}
public Cash(int cts) { // secondary
this(cts, "USD");
}
public Cash(int cts, String crn) { // primary
this.cents = cts;
this.currency = crn;
}
// methods here
}
There are three constructors in the class—only one is primary and the other two are secondary. My definition of a secondary constructor is simple: It doesn’t do anything besides calling a primary constructor, through this(..).
My point here is that a properly designed class must have only one primary constructor, and it should be declared after all secondary ones. Why? There is only one reason behind this rule: It helps eliminate code duplication.
Without such a rule, we may have this design for our class:
final class Cash {
private final int cents;
private final String currency;
public Cash() { // primary
this.cents = 0;
this.currency = "USD";
}
public Cash(int cts) { // primary
this.cents = cts;
this.currency = "USD";
}
public Cash(int cts, String crn) { // primary
this.cents = cts;
this.currency = crn;
}
// methods here
}
There’s not a lot of code here, but the duplication is massive and ugly; I hope you see it for yourself.
By strictly following this suggested rule, all classes will have a single entry point (point of construction), which is a primary constructor, and it will always be easy to find because it stays below all secondary constructors.
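One practical payoff of the rule, sketched below: add a validity check to the primary constructor and every secondary constructor inherits it automatically. The negative-amount check is my own example, not from the book:

```java
final class Cash {
    private final int cents;
    private final String currency;

    Cash() { // secondary
        this(0);
    }

    Cash(int cts) { // secondary
        this(cts, "USD");
    }

    Cash(int cts, String crn) { // primary
        if (cts < 0) {
            // One check, written once, enforced on every construction path.
            throw new IllegalArgumentException("negative amount");
        }
        this.cents = cts;
        this.currency = crn;
    }

    int cents() {
        return this.cents;
    }
}
```

Had each constructor initialized the fields itself, the check would have to be copied three times, and sooner or later one copy would drift out of sync.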
More about this subject in Elegant Objects, Section 1.2.
Test fixtures in @Before/@After methods and private static helper methods both look anti-OOP to me, and I think there is an alternative. Let me explain.
JUnit officially suggests a test fixture:
public final class MetricsTest {
private File temp;
private Folder folder;
@Before
public void prepare() throws IOException {
this.temp = Files.createTempDirectory("test").toFile();
this.folder = new DiscFolder(this.temp);
this.folder.save("first.txt", "Hello, world!");
this.folder.save("second.txt", "Goodbye!");
}
@After
public void clean() throws IOException {
FileUtils.deleteDirectory(this.temp);
}
@Test
public void calculatesTotalSize() {
assertEquals(22, new Metrics(this.folder).size());
}
@Test
public void countsWordsInFiles() {
assertEquals(4, new Metrics(this.folder).wc());
}
}
I think it’s obvious what this test is doing. First, in prepare(), it creates a “test fixture” of type Folder. That fixture is used in both tests as an argument for the Metrics constructor. The real class being tested here is Metrics, while this.folder is something we need in order to test it.
What’s wrong with this test? There is one serious issue: coupling between test methods. Test methods (and all tests in general) must be perfectly isolated from each other. This means that changing one test must not affect any others. In this example, that is not the case. When I want to change the countsWordsInFiles() test, I have to change the internals of prepare(), which will affect the other method in the test “class.”
With all due respect to JUnit, the idea of creating test fixtures in @Before and @After is wrong, mostly because it encourages developers to couple test methods.
Here is how we can improve our test and isolate test methods:
public final class MetricsTest {
@Test
public void calculatesTotalSize() throws IOException {
final File dir = Files.createTempDirectory("test-1").toFile();
final Folder folder = MetricsTest.folder(
dir,
"first.txt:Hello, world!",
"second.txt:Goodbye!"
);
try {
assertEquals(22, new Metrics(folder).size());
} finally {
FileUtils.deleteDirectory(dir);
}
}
@Test
public void countsWordsInFiles() throws IOException {
final File dir = Files.createTempDirectory("test-2").toFile();
final Folder folder = MetricsTest.folder(
dir,
"alpha.txt:Three words here",
"beta.txt:two words",
"gamma.txt:one!"
);
try {
assertEquals(6, new Metrics(folder).wc());
} finally {
FileUtils.deleteDirectory(dir);
}
}
private static Folder folder(File dir, String... parts) {
Folder folder = new DiscFolder(dir);
for (final String part : parts) {
final String[] pair = part.split(":", 2);
folder.save(pair[0], pair[1]);
}
return folder;
}
}
Does it look better now? We’re not there yet, but now our test methods are perfectly isolated. If I want to change one of them, I’m not going to affect the others because I pass all configuration parameters to a private static utility (!) method folder().
A utility method, huh? Yes, it smells.
The main issue with this design, even though it is way better than the previous one, is that it doesn’t prevent code duplication between test “classes.” If I need a similar test fixture of type Folder in another test case, I will have to move this static method there. Or even worse, I will have to create a utility class. Yes, there is nothing worse in object-oriented programming than utility classes.
A much better design would be to use “fake” objects instead of private static utilities. Here is how. First, we create a fake class and place it into src/main/java. This class can be used in tests and also in production code, if necessary (Fk for “fake”):
public final class FkFolder implements Folder, Closeable {
private final File dir;
private final String[] parts;
public FkFolder(String... parts) throws IOException {
this(Files.createTempDirectory("test-1").toFile(), parts);
}
public FkFolder(File file, String... parts) {
this.dir = file;
this.parts = parts;
}
@Override
public Iterable<File> files() {
final Folder folder = new DiscFolder(this.dir);
for (final String part : this.parts) {
final String[] pair = part.split(":", 2);
folder.save(pair[0], pair[1]);
}
return folder.files();
}
@Override
public void close() throws IOException {
FileUtils.deleteDirectory(this.dir);
}
}
Here is how our test will look now:
public final class MetricsTest {
@Test
public void calculatesTotalSize() throws IOException {
final String[] parts = {
"first.txt:Hello, world!",
"second.txt:Goodbye!"
};
try (final FkFolder folder = new FkFolder(parts)) {
assertEquals(22, new Metrics(folder).size());
}
}
@Test
public void countsWordsInFiles() throws IOException {
final String[] parts = {
"alpha.txt:Three words here",
"beta.txt:two words",
"gamma.txt:one!"
};
try (final FkFolder folder = new FkFolder(parts)) {
assertEquals(6, new Metrics(folder).wc());
}
}
}
What do you think? Isn’t it better than what JUnit offers? Isn’t it more reusable and extensible than utility methods?
To summarize, I believe scaffolding in unit testing must be done through fake objects that are shipped together with production code.
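The same approach works for any dependency, not just file systems. Here is a self-contained sketch with an invented Clock interface whose fake, FkClock, ships together with the production code and is ready for any test class that needs it:

```java
// The production interface; a real implementation would wrap
// System.currentTimeMillis() or java.time.Clock.
interface Clock {
    long now();
}

// A fake object, shipped in src/main/java alongside the interface.
// Any test, in any test class, can construct it with a fixed time.
final class FkClock implements Clock {
    private final long time;

    FkClock(long time) {
        this.time = time;
    }

    @Override
    public long now() {
        return this.time;
    }
}

// The class under test depends only on the interface.
final class Timer {
    private final Clock clock;

    Timer(Clock clock) {
        this.clock = clock;
    }

    long secondsSince(long start) {
        return (this.clock.now() - start) / 1000L;
    }
}
```

No @Before, no @After, no utility methods: each test simply instantiates the fake it needs, and no test can couple itself to another.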
"gamma.txt:one!"
);
try {
assertEquals(6, new Metrics(folder).wc());
} finally {
FileUtils.deleteDirectory(dir);
}
}
private static Folder folder(File dir, String... parts) {
Folder folder = new DiscFolder(dir);
for (final String part : parts) {
final String[] pair = part.split(":", 2);
this.folder.save(pair[0], pair[1]);
}
return folder;
}
}Does it look better now? We’re not there yet, but now our test methods are perfectly isolated. If I want to change one of them, I’m not going to affect the others because I pass all configuration parameters to a private static utility (!) method folder().
A utility method, huh? Yes, it smells.
The main issue with this design, even though it is way better than the previous one, is that it doesn’t prevent code duplication between test “classes.” If I need a similar test fixture of type Folder in another test case, I will have to move this static method there. Or even worse, I will have to create a utility class. Yes, there is nothing worse in object-oriented programming than utility classes.
A much better design would be to use “fake” objects instead of private static utilities. Here is how. First, we create a fake class and place it into src/main/java. This class can be used in tests and also in production code, if necessary (Fk for “fake”):
public final class FkFolder implements Folder, Closeable {
private final File dir;
private final String[] parts;
public FkFolder(String... prts) {
this(Files.createTempDirectory("test-1"), parts);
}
public FkFolder(File file, String... prts) {
this.dir = file;
this.parts = parts;
}
@Override
public Iterable<File> files() {
final Folder folder = new DiscFolder(this.dir);
for (final String part : this.parts) {
final String[] pair = part.split(":", 2);
folder.save(pair[0], pair[1]);
}
return folder.files();
}
@Override
public void close() {
FileUtils.deleteDirectory(this.dir);
}
}Here is how our test will look now:
public final class MetricsTest {
@Test
public void calculatesTotalSize() {
final String[] parts = {
"first.txt:Hello, world!",
"second.txt:Goodbye!"
};
try (final Folder folder = new FkFolder(parts)) {
assertEquals(22, new Metrics(folder).size());
}
}
@Test
public void countsWordsInFiles() {
final String[] parts = {
"alpha.txt:Three words here",
"beta.txt:two words"
"gamma.txt:one!"
};
try (final Folder folder = new FkFolder(parts)) {
assertEquals(6, new Metrics(folder).wc());
}
}
}What do you think? Isn’t it better than what JUnit offers? Isn’t it more reusable and extensible than utility methods?
To summarize, I believe scaffolding in unit testing must be done through fake objects that are shipped together with production code.
"/>
https://www.yegor256.com/2015/05/25/unit-test-scaffolding.html
A Few Thoughts on Unit Test Scaffolding
- Atherton, CA
- Yegor Bugayenko
When I start to repeat myself in unit test methods by creating the same objects and preparing the data to run the test, I feel disappointed in my design. Long test methods with a lot of code duplication just don’t look right. To simplify and shorten them, there are basically two options, at least in Java: 1) private properties initialized through @Before and @BeforeClass, and 2) private static methods. They both look anti-OOP to me, and I think there is an alternative. Let me explain.
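Before diving in: the examples below rely on a Folder abstraction and a DiscFolder class that are never shown in the article. Here is a minimal, hypothetical sketch of what they might look like, with the two method signatures inferred from how the tests call them:

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the Folder abstraction the examples assume;
// only the two methods actually called in the article are declared.
interface Folder {
    void save(String name, String content); // create a file with this content
    Iterable<File> files(); // enumerate the files in the folder
}

// A minimal disc-backed implementation, loosely matching the DiscFolder
// the article mentions but never defines.
final class DiscFolder implements Folder {
    private final File dir;
    DiscFolder(final File dir) {
        this.dir = dir;
    }
    @Override
    public void save(final String name, final String content) {
        try {
            Files.write(new File(this.dir, name).toPath(), content.getBytes());
        } catch (final IOException ex) {
            throw new IllegalStateException(ex);
        }
    }
    @Override
    public Iterable<File> files() {
        final List<File> found = new ArrayList<>();
        final File[] listed = this.dir.listFiles();
        if (listed != null) {
            for (final File file : listed) {
                found.add(file);
            }
        }
        return found;
    }
}
```

Everything here is a guess from usage, not the article's real code; it exists only so the snippets that follow have something concrete behind them.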

JUnit officially suggests a test fixture:
public final class MetricsTest {
private File temp;
private Folder folder;
@Before
public void prepare() throws IOException {
this.temp = Files.createTempDirectory("test").toFile();
this.folder = new DiscFolder(this.temp);
this.folder.save("first.txt", "Hello, world!");
this.folder.save("second.txt", "Goodbye!");
}
@After
public void clean() throws IOException {
FileUtils.deleteDirectory(this.temp);
}
@Test
public void calculatesTotalSize() {
assertEquals(22, new Metrics(this.folder).size());
}
@Test
public void countsWordsInFiles() {
assertEquals(4, new Metrics(this.folder).wc());
}
}
I think it’s obvious what this test is doing. First, in prepare(), it creates a “test fixture” of type Folder. That fixture is used in both tests as an argument for the Metrics constructor. The real class being tested here is Metrics, while this.folder is something we need in order to test it.
What’s wrong with this test? There is one serious issue: coupling between test methods. Test methods (and all tests in general) must be perfectly isolated from each other. This means that changing one test must not affect any others. In this example, that is not the case. When I want to change the countsWordsInFiles() test, I have to change the internals of prepare(), which will affect the other method in the test “class.”
With all due respect to JUnit, the idea of creating test fixtures in @Before and @After is wrong, mostly because it encourages developers to couple test methods.
Here is how we can improve our test and isolate test methods:
public final class MetricsTest {
@Test
public void calculatesTotalSize() throws Exception {
final File dir = Files.createTempDirectory("test-1").toFile();
final Folder folder = MetricsTest.folder(
dir,
"first.txt:Hello, world!",
"second.txt:Goodbye!"
);
try {
assertEquals(22, new Metrics(folder).size());
} finally {
FileUtils.deleteDirectory(dir);
}
}
@Test
public void countsWordsInFiles() throws Exception {
final File dir = Files.createTempDirectory("test-2").toFile();
final Folder folder = MetricsTest.folder(
dir,
"alpha.txt:Three words here",
"beta.txt:two words",
"gamma.txt:one!"
);
try {
assertEquals(6, new Metrics(folder).wc());
} finally {
FileUtils.deleteDirectory(dir);
}
}
private static Folder folder(File dir, String... parts) {
final Folder folder = new DiscFolder(dir);
for (final String part : parts) {
final String[] pair = part.split(":", 2);
folder.save(pair[0], pair[1]);
}
return folder;
}
}
Does it look better now? We’re not there yet, but now our test methods are perfectly isolated. If I want to change one of them, I’m not going to affect the others, because I pass all configuration parameters to a private static utility (!) method folder().
A utility method, huh? Yes, it smells.
The main issue with this design, even though it is way better than the previous one, is that it doesn’t prevent code duplication between test “classes.” If I need a similar test fixture of type Folder in another test case, I will have to move this static method there. Or even worse, I will have to create a utility class. Yes, there is nothing worse in object-oriented programming than utility classes.
A much better design would be to use “fake” objects instead of private static utilities. Here is how. First, we create a fake class and place it into src/main/java. This class can be used in tests and also in production code, if necessary (Fk for “fake”):
public final class FkFolder implements Folder, Closeable {
private final File dir;
private final String[] parts;
public FkFolder(String... prts) throws IOException {
this(Files.createTempDirectory("test-1").toFile(), prts);
}
public FkFolder(File file, String... prts) {
this.dir = file;
this.parts = prts;
}
@Override
public Iterable<File> files() {
final Folder folder = new DiscFolder(this.dir);
for (final String part : this.parts) {
final String[] pair = part.split(":", 2);
folder.save(pair[0], pair[1]);
}
return folder.files();
}
@Override
public void close() throws IOException {
FileUtils.deleteDirectory(this.dir);
}
}
Here is how our test will look now:
public final class MetricsTest {
@Test
public void calculatesTotalSize() throws Exception {
final String[] parts = {
"first.txt:Hello, world!",
"second.txt:Goodbye!"
};
try (final FkFolder folder = new FkFolder(parts)) {
assertEquals(22, new Metrics(folder).size());
}
}
@Test
public void countsWordsInFiles() throws Exception {
final String[] parts = {
"alpha.txt:Three words here",
"beta.txt:two words",
"gamma.txt:one!"
};
try (final FkFolder folder = new FkFolder(parts)) {
assertEquals(6, new Metrics(folder).wc());
}
}
}
What do you think? Isn’t it better than what JUnit offers? Isn’t it more reusable and extensible than utility methods?
To summarize, I believe scaffolding in unit testing must be done through fake objects that are shipped together with production code.
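Since neither Metrics nor the Folder plumbing appears in the article, the end-to-end shape of the pattern can be shown with a self-contained, in-memory analogue (all names here are hypothetical): the fake implements the same interface as the production class, ships next to it in src/main/java, and each test simply composes it with the object under test.

```java
import java.util.Arrays;

// Hypothetical in-memory analogue of the article's Folder abstraction.
interface Texts {
    Iterable<String> all();
}

// The fake: ships alongside production code, reusable by every test class.
final class FkTexts implements Texts {
    private final String[] items;
    FkTexts(final String... items) {
        this.items = items;
    }
    @Override
    public Iterable<String> all() {
        return Arrays.asList(this.items);
    }
}

// The class under test, analogous to Metrics in the article.
final class WordCount {
    private final Texts texts;
    WordCount(final Texts texts) {
        this.texts = texts;
    }
    int total() {
        int words = 0;
        for (final String text : this.texts.all()) {
            // count whitespace-separated words in each text
            words += text.trim().split("\\s+").length;
        }
        return words;
    }
}
```

A test then becomes one line: `assertEquals(6, new WordCount(new FkTexts("Three words here", "two words", "one!")).total())`. There is no fixture, no setup method, and nothing to tear down.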

https://www.yegor256.com/2015/05/07/ctors-must-be-code-free.html
Constructors Must Be Code-Free
- Yegor Bugayenko
How much work should be done within a constructor? It seems reasonable to do some computations inside a constructor and then encapsulate results. That way, when the results are required by object methods, we’ll have them ready. Sounds like a good approach? No, it’s not. It’s a bad idea for one reason: It prevents composition of objects and makes them un-extensible.

Let’s say we’re making an interface that would represent a name of a person:
interface Name {
String first();
}
Pretty easy, right? Now, let’s try to implement it:
public final class EnglishName implements Name {
private final String name;
public EnglishName(final CharSequence text) {
this.name = text.toString().split(" ", 2)[0];
}
@Override
public String first() {
return this.name;
}
}
What’s wrong with this? It’s faster, right? It splits the name into parts only once and encapsulates them. Then, no matter how many times we call the first() method, it will return the same value and won’t need to do the splitting again. However, this is flawed thinking! Let me show you the right way and explain:
public final class EnglishName implements Name {
private final CharSequence text;
public EnglishName(final CharSequence txt) {
this.text = txt;
}
@Override
public String first() {
return this.text.toString().split(" ", 2)[0];
}
}
This is the right design. I can see you smiling, so let me prove my point.
Before I start proving, though, let me ask you to read this article: Composable Decorators vs. Imperative Utility Methods. It explains the difference between a static method and composable decorators. The first snippet above is very close to an imperative utility method, even though it looks like an object. The second example is a true object.
In the first example, we are abusing the new operator and turning it into a static method, which does all calculations for us right here and now. This is what imperative programming is about. In imperative programming, we do all calculations right now and return fully ready results. In declarative programming, we are instead trying to delay calculations for as long as possible.
Let’s try to use our EnglishName class:
final Name name = new EnglishName(
new NameInPostgreSQL(/*...*/)
);
if (/* something goes wrong */) {
throw new IllegalStateException(
String.format(
"Hi, %s, we can't proceed with your application",
name.first()
)
);
}
In the first line of this snippet, we are just making an instance of an object and labeling it name. We don’t want to go to the database yet and fetch the full name from there, split it into parts, and encapsulate them inside name. We just want to create an instance of an object. Such a parsing behavior would be a side effect for us and, in this case, will slow down the application. As you see, we may only need name.first() if something goes wrong and we need to construct an exception object.
My point is that having any computations done inside a constructor is a bad practice and must be avoided because they are side effects and are not requested by the object owner.
What about performance during the re-use of name, you may ask. If we make an instance of EnglishName and then call name.first() five times, we’ll end up with five calls to the String.split() method.
To solve that, we create another class, a composable decorator, which will help us solve this “re-use” problem:
public final class CachedName implements Name {
private final Name origin;
public CachedName(final Name name) {
this.origin = name;
}
@Override
@Cacheable(forever = true)
public String first() {
return this.origin.first();
}
}
I’m using the Cacheable annotation from jcabi-aspects, but you can use any other caching tools available in Java (or other languages), like Guava Cache:
public final class CachedName implements Name {
private final Cache<Long, String> cache =
CacheBuilder.newBuilder().build();
private final Name origin;
public CachedName(final Name name) {
this.origin = name;
}
@Override
public String first() {
try {
return this.cache.get(
1L,
new Callable<String>() {
@Override
public String call() {
return CachedName.this.origin.first();
}
}
);
} catch (final ExecutionException ex) {
throw new IllegalStateException(ex);
}
}
}
But please don’t make CachedName mutable and lazily loaded—it’s an anti-pattern, which I’ve discussed before in Objects Should Be Immutable.
This is how our code will look now:
final Name name = new CachedName(
new EnglishName(
new NameInPostgreSQL(/*...*/)
)
);
It’s a very primitive example, but I hope you get the idea.
In this design, we’re basically splitting the object into two parts. The first one knows how to get the first name from the English name. The second one knows how to cache the results of this calculation in memory. And now it’s my decision, as a user of these classes, how exactly to use them. I will decide whether I need caching or not. This is what object composition is all about.
Let me reiterate that the only allowed statement inside a constructor is an assignment. If you need to put something else there, start thinking about refactoring—your class definitely needs a redesign.
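As a closing sketch (a hypothetical class, not from the article), here is the same rule applied to another domain: the constructor does nothing but assign, and the parsing work runs only when, and every time, the result is actually requested. If repeated parsing becomes a concern, a caching decorator in the style of CachedName can be composed around it.

```java
// Hypothetical example following the rule: the constructor only assigns;
// the (possibly expensive) parsing is deferred to port().
final class PortOfUrl {
    private final String url;
    PortOfUrl(final String url) {
        this.url = url; // assignment only, no computation
    }
    int port() {
        // parsing happens here, on demand, each time it is requested
        final String rest = this.url.substring(this.url.indexOf("://") + 3);
        final int colon = rest.indexOf(':');
        final int slash = rest.indexOf('/');
        if (colon < 0 || (slash >= 0 && colon > slash)) {
            return 80; // no explicit port in the URL; assume a default
        }
        final int end = slash < 0 ? rest.length() : slash;
        return Integer.parseInt(rest.substring(colon + 1, end));
    }
}
```

An object like this stays cheap to construct, so it can be freely created, passed around, and composed, and the caller decides whether the parsing price is ever paid at all.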
How much work should be done within a constructor? It seems reasonable to do some computations inside a constructor and then encapsulate results. That way, when the results are required by object methods, we’ll have them ready. Sounds like a good approach? No, it’s not. It’s a bad idea for one reason: It prevents composition of objects and makes them un-extensible.

Let’s say we’re making an interface that would represent a name of a person:
interface Name {
String first();
}Pretty easy, right? Now, let’s try to implement it:
public final class EnglishName implements Name {
private final String name;
public EnglishName(final CharSequence text) {
this.name = text.toString().split(" ", 2)[0];
}
@Override
public String first() {
return this.name;
}
}What’s wrong with this? It’s faster, right? It splits the name into parts only once and encapsulates them. Then, no matter how many times we call the first() method, it will return the same value and won’t need to do the splitting again. However, this is flawed thinking! Let me show you the right way and explain:
public final class EnglishName implements Name {
private final CharSequence text;
public EnglishName(final CharSequence txt) {
this.text = txt;
}
@Override
public String first() {
return this.text.toString().split("", 2)[0];
}
}This is the right design. I can see you smiling, so let me prove my point.
Before I start proving, though, let me ask you to read this article: Composable Decorators vs. Imperative Utility Methods. It explains the difference between a static method and composable decorators. The first snippet above is very close to an imperative utility method, even though it looks like an object. The second example is a true object.
In the first example, we are abusing the new operator and turning it into a static method, which does all calculations for us right here and now. This is what imperative programming is about. In imperative programming, we do all calculations right now and return fully ready results. In declarative programming, we are instead trying to delay calculations for as long as possible.
Let’s try to use our EnglishName class:
final Name name = new EnglishName(
new NameInPostgreSQL(/*...*/)
);
if (/* something goes wrong */) {
throw new IllegalStateException(
String.format(
"Hi, %s, we can't proceed with your application",
name.first()
)
);
}In the first line of this snippet, we are just making an instance of an object and labeling it name. We don’t want to go to the database yet and fetch the full name from there, split it into parts, and encapsulate them inside name. We just want to create an instance of an object. Such a parsing behavior would be a side effect for us and, in this case, will slow down the application. As you see, we may only need name.first() if something goes wrong and we need to construct an exception object.
My point is that having any computations done inside a constructor is a bad practice and must be avoided because they are side effects and are not requested by the object owner.
What about performance during the re-use of name, you may ask. If we make an instance of EnglishName and then call name.first() five times, we’ll end up with five calls to the String.split() method.
To solve that, we create another class, a composable decorator, which will help us solve this “re-use” problem:
public final class CachedName implements Name {
private final Name origin;
public CachedName(final Name name) {
this.origin = name;
}
@Override
@Cacheable(forever = true)
public String first() {
return this.origin.first();
}
}I’m using the Cacheable annotation from jcabi-aspects, but you can use any other caching tools available in Java (or other languages), like Guava Cache:
public final class CachedName implements Name {
private final Cache<Long, String> cache =
CacheBuilder.newBuilder().build();
private final Name origin;
public CachedName(final Name name) {
this.origin = name;
}
@Override
public String first() {
try {
return this.cache.get(
1L,
new Callable<String>() {
@Override
public String call() {
return CachedName.this.origin.first();
}
}
);
} catch (final ExecutionException ex) {
throw new IllegalStateException(ex);
}
}
}But please don’t make CachedName mutable and lazily loaded—it’s an anti-pattern, which I’ve discussed before in Objects Should Be Immutable.
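For contrast, here is a minimal sketch of the lazy-loading anti-pattern being warned against; the `LazyName` class is hypothetical, and the `Name` interface is repeated only to keep the snippet self-contained. The `cached` field makes the object mutable, and the null check is not even thread-safe:

```java
interface Name {
    String first();
}

// Anti-pattern, shown only as a counter-example: don't do this.
final class LazyName implements Name {
    private final Name origin;
    private String cached; // mutable state -- this is the problem

    LazyName(final Name name) {
        this.origin = name;
    }

    @Override
    public String first() {
        if (this.cached == null) { // not thread-safe
            this.cached = this.origin.first(); // mutation on first call
        }
        return this.cached;
    }
}
```

The decorator approach keeps the object itself immutable and delegates the mutable bookkeeping to a dedicated caching tool.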
This is how our code will look now:
final Name name = new CachedName(
new EnglishName(
new NameInPostgreSQL(/*...*/)
)
);It’s a very primitive example, but I hope you get the idea.
In this design, we’re basically splitting the object into two parts. The first one knows how to get the first name from the English name. The second one knows how to cache the results of this calculation in memory. And now it’s my decision, as a user of these classes, how exactly to use them. I will decide whether I need caching or not. This is what object composition is all about.
Let me reiterate that the only allowed statement inside a constructor is an assignment. If you need to put something else there, start thinking about refactoring—your class definitely needs a redesign.

https://www.yegor256.com/2015/04/02/class-casting-is-anti-pattern.html
Class Casting Is a Discriminating Anti-Pattern
- Yegor Bugayenko
Type casting is a very useful technique when there is no time or desire to think and design objects properly. Type casting (or class casting) helps us work with provided objects differently, based on the class they belong to or the interface they implement. Class casting helps us discriminate against the poor objects and segregate them by their race, gender, and religion. Can this be a good practice?

This is a very typical example of type casting (Google Guava is full of it, for example Iterables.size()):
public final class Foo {
public int sizeOf(Iterable items) {
int size = 0;
if (items instanceof Collection) {
size = Collection.class.cast(items).size();
} else {
for (Object item : items) {
++size;
}
}
return size;
}
}This sizeOf() method calculates the size of an iterable. However, it is smart enough to understand that if items are also instances of Collection, there is no need to actually iterate them. It would be much faster to cast them to Collection and then call method size(). Looks logical, but what’s wrong with this approach? I see two practical problems.
First, there is a hidden coupling of sizeOf() and Collection. This coupling is not visible to the clients of sizeOf(). They don’t know that method sizeOf() relies on interface Collection. If tomorrow we decide to change it, sizeOf() won’t work. And we’ll be very surprised, since its signature says nothing about this dependency. This won’t happen with Collection, obviously, since it is part of the JDK, but with custom classes, this may and will happen.
The second problem is an inevitably growing complexity of method sizeOf(). The more special types it has to treat differently, the more complex it will become. This if/then forking is inevitable, since it has to check all possible types and give them special treatment. Such complexity is a result of a violation of the single responsibility principle. The method is not only calculating the size of Iterable but is also performing type casting and forking based on that casting.
What is the alternative? There are a few, but the most obvious is method overloading (not available in semi-OOP languages like Ruby or PHP):
public final class Foo {
public int sizeOf(Iterable items) {
int size = 0;
for (Object item : items) {
++size;
}
return size;
}
public int sizeOf(Collection items) {
return items.size();
}
}Isn’t that more elegant?
Philosophically speaking, type casting is discrimination against the object that comes into the method. The object complies with the contract provided by the method signature. It implements the Iterable interface, which is a contract, and it expects equal treatment with all other objects that come into the same method. But the method discriminates objects by their types. The method is basically asking the object about its… race. Black objects go right while white objects go left. That’s what this instanceof is doing, and that’s what discrimination is all about.
By using instanceof, the method is segregating incoming objects by the certain group they belong to. In this case, there are two groups: collections and everybody else. If you are a collection, you get special treatment. Even though you abide by the Iterable contract, we still treat some objects specially because they belong to an “elite” group called Collection.
You may say that Collection is just another contract that an object may comply with. That’s true, but in this case, there should be another door through which those who work by that contract should enter. You announced that sizeOf() accepts everybody who works on the Iterable contract. I am an object, and I do what the contract says. I enter the method and expect equal treatment with everybody else who comes into the same method. But, apparently, once inside the method, I realize that some objects have some special privileges. Isn’t that discrimination?
To conclude, I would consider instanceof and class casting to be anti-patterns and code smells. Once you see a need to use them, start thinking about refactoring.
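In the spirit of the composable decorators discussed earlier, another option is to make the counting strategy an explicit object, so the caller, who actually knows what it holds, picks the door. The Length classes below are a hypothetical sketch, not an existing API:

```java
import java.util.Collection;

interface Length {
    int value();
}

// Counts by iterating; works for any Iterable.
final class LengthOfIterable implements Length {
    private final Iterable<?> items;

    LengthOfIterable(final Iterable<?> items) {
        this.items = items;
    }

    @Override
    public int value() {
        int size = 0;
        for (final Object item : this.items) {
            ++size;
        }
        return size;
    }
}

// Asks the collection directly; chosen explicitly by the caller.
final class LengthOfCollection implements Length {
    private final Collection<?> items;

    LengthOfCollection(final Collection<?> items) {
        this.items = items;
    }

    @Override
    public int value() {
        return this.items.size();
    }
}
```

Here no method ever inspects the type of its argument; the choice between the two strategies is made openly, at composition time, by whoever constructs the object.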

https://www.yegor256.com/2015/03/09/objects-end-with-er.html
Don't Create Objects That End With -ER
- Yegor Bugayenko
Manager. Controller. Helper. Handler. Writer. Reader. Converter. Validator. Router. Dispatcher. Observer. Listener. Sorter. Encoder. Decoder. This is the class names hall of shame. Have you seen them in your code? In open source libraries you’re using? In pattern books? They are all wrong. What do they have in common? They all end in “-er.” And what’s wrong with that? They are not classes, and the objects they instantiate are not objects. Instead, they are collections of procedures pretending to be classes.

Peter Coad used to say: Challenge any class name that ends in “-er.” There are a few good articles about this subject, including Your Coding Conventions Are Hurting You by Carlo Pescio, One of the Best Bits of Programming Advice I Ever Got by Travis Griggs, and Naming Objects – Don’t Use ER in Your Object Names by Ben Hall. The main argument against this “-er” suffix is that “when you need a manager, it’s often a sign that the managed are just plain old data structures and that the manager is the smart procedure doing the real work.”
I totally agree but would like to add a few words to this.
I mentioned already in Seven Virtues of a Good Object that a good object name is not a job title, but I didn’t explain why I think so. Besides that, in Utility Classes Have Nothing to Do With Functional Programming, I tried to explain the difference between declarative and imperative programming paradigms. Now it’s time to put these two pieces together.
Let’s say I’m an object and you’re my client. You give me a bucket of apples and ask me to sort them by size. If I’m living in the world of imperative programming, you will get them sorted immediately, and we will never interact again. I will do my job just as requested, without even thinking why you need them sorted. I would be a sorter who doesn’t really care about your real intention:
List<Apple> sorted = new Sorter().sort(apples);
Apple biggest = sorted.get(0);As you see here, the real intention is to find the biggest apple in the bucket.
This is not what you would expect from a good business partner who can help you work with a bucket of apples.
Instead, if I lived in the world of declarative programming, I would tell you: “Consider them sorted; what do you want to do next?” You, in turn, would tell me that you need the biggest apple now. And I would say, “No problem; here it is.” In order to return the biggest one, I would not sort them all. I would just go through them one by one and select the biggest. This operation is much faster than sorting first and then selecting the first item in the list.
In other words, I would silently not follow your instructions but would try to do my business my way. I would be a much smarter partner of yours than that imperative sorter. And I would become a real object that behaves like a sorted list of apples instead of a procedure that sorts:
List<Apple> sorted = new Sorted(apples);
Apple biggest = sorted.get(0);See the difference?
Pay special attention to the difference between the sorter and sorted names.
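One possible sketch of such a Sorted object follows; it is hypothetical and assumes an Apple that exposes its size. The constructor only stores the bucket, and when the biggest apple is requested, a single scan is enough, with no full sort:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

// A hypothetical Apple that knows its size.
final class Apple {
    private final int size;

    Apple(final int size) {
        this.size = size;
    }

    int size() {
        return this.size;
    }
}

// Declarative "sorted bucket": the constructor does no work at all.
final class Sorted {
    private final List<Apple> apples;

    Sorted(final List<Apple> apples) {
        this.apples = apples;
    }

    Apple get(final int index) {
        if (index == 0) {
            // The biggest apple needs one pass, not a full sort.
            return Collections.max(
                this.apples, Comparator.comparingInt(Apple::size)
            );
        }
        // Other positions do require sorting; still done only on demand.
        final List<Apple> copy = new ArrayList<>(this.apples);
        copy.sort(Comparator.comparingInt(Apple::size).reversed());
        return copy.get(index);
    }
}
```

The object decides for itself how to honor the "sorted" contract; the client never learns whether a sort actually happened.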
Let’s get back to class names. When you add the “-er” suffix to your class name, you’re immediately turning it into a dumb imperative executor of your will. You do not allow it to think and improvise. You expect it to do exactly what you want—sort, manage, control, print, write, combine, concatenate, etc.
An object is a living organism that doesn’t want to be told what to do. It wants to be an equal partner with other objects, exposing behavior according to its contract(s), a.k.a. interfaces in Java and C# or protocols in Swift.
Philosophically speaking, the “-er” suffix is a sign of disrespect toward the poor object.
Manager. Controller. Helper. Handler. Writer. Reader. Converter. Validator. Router. Dispatcher. Observer. Listener. Sorter. Encoder. Decoder. This is the class names hall of shame. Have you seen them in your code? In open source libraries you’re using? In pattern books? They are all wrong. What do they have in common? They all end in “-er.” And what’s wrong with that? They are not classes, and the objects they instantiate are not objects. Instead, they are collections of procedures pretending to be classes.

Peter Coad used to say: Challenge any class name that ends in “-er.” There are a few good articles about this subject, including Your Coding Conventions Are Hurting You by Carlo Pescio, One of the Best Bits of Programming Advice I Ever Got by Travis Griggs, and Naming Objects – Don’t Use ER in Your Object Names by Ben Hall. The main argument against this “-er” suffix is that “when you need a manager, it’s often a sign that the managed are just plain old data structures and that the manager is the smart procedure doing the real work.”
I totally agree but would like to add a few words to this.
I mentioned already in Seven Virtues of a Good Object that a good object name is not a job title, but I didn’t explain why I think so. Besides that, in Utility Classes Have Nothing to Do With Functional Programming, I tried to explain the difference between declarative and imperative programming paradigms. Now it’s time to put these two pieces together.
Let’s say I’m an object and you’re my client. You give me a bucket of apples and ask me to sort them by size. If I’m living in the world of imperative programming, you will get them sorted immediately, and we will never interact again. I will do my job just as requested, without even thinking why you need them sorted. I would be a sorter who doesn’t really care about your real intention:
List<Apple> sorted = new Sorter().sort(apples);
Apple biggest = sorted.get(0);As you see here, the real intention is to find the biggest apple in the bucket.
This is not what you would expect from a good business partner who can help you work with a bucket of apples.
Instead, if I lived in the world of declarative programming, I would tell you: “Consider them sorted; what do you want to do next?.” You, in turn, would tell me that you need the biggest apple now. And I would say, “No problem; here it is.” In order to return the biggest one, I would not sort them all. I would just go through them all one by one and select the biggest. This operation is much faster than sorting first and then selecting the first in the list.
In other words, I would silently not follow your instructions but would try to do my business my way. I would be a much smarter partner of yours than that imperative sorter. And I would become a real object that behaves like a sorted list of apples instead of a procedure that sorts:
List<Apple> sorted = new Sorted(apples);
Apple biggest = sorted.get(0);See the difference?
Pay special attention to the difference between the sorter and sorted names.
Let’s get back to class names. When you add the “-er” suffix to your class name, you’re immediately turning it into a dumb imperative executor of your will. You do not allow it to think and improvise. You expect it to do exactly what you want—sort, manage, control, print, write, combine, concatenate, etc.
An object is a living organism that doesn’t want to be told what to do. It wants to be an equal partner with other objects, exposing behavior according to its contract(s), a.k.a. interfaces in Java and C# or protocols in Swift.
Philosophically speaking, the “-er” suffix is a sign of disrespect toward the poor object.
https://www.yegor256.com/2015/02/26/composable-decorators.html
Composable Decorators vs. Imperative Utility Methods
- Yegor Bugayenko
The decorator pattern is my favorite among all other patterns I’m aware of. It is a very simple and yet very powerful mechanism to make your code highly cohesive and loosely coupled. However, I believe decorators are not used often enough. They should be everywhere, but they are not. The biggest advantage we get from decorators is that they make our code composable. That’s why the title of this post is composable decorators. Unfortunately, instead of decorators, we often use imperative utility methods, which make our code procedural rather than object-oriented.

First, a practical example. Here is an interface for an object that is supposed to read a text somewhere and return it:
interface Text {
String read();
}
Here is an implementation that reads the text from a file:
final class TextInFile implements Text {
private final File file;
public TextInFile(final File src) {
this.file = src;
}
@Override
public String read() {
try {
return new String(
Files.readAllBytes(this.file.toPath()),
StandardCharsets.UTF_8
);
} catch (final IOException ex) {
throw new UncheckedIOException(ex);
}
}
}
And now the decorator, which is another implementation of Text that removes all unprintable characters from the text:
final class PrintableText implements Text {
private final Text origin;
public PrintableText(final Text text) {
this.origin = text;
}
@Override
public String read() {
return this.origin.read()
.replaceAll("[^\\p{Print}]", "");
}
}
Here is how I’m using it:
final Text text = new PrintableText(
new TextInFile(new File("/tmp/a.txt"))
);
String content = text.read();
As you can see, the PrintableText doesn’t read the text from the file. It doesn’t really care where the text is coming from. It delegates text reading to the encapsulated instance of Text. How this encapsulated object will deal with the text and where it will get it doesn’t concern PrintableText.
Let’s continue and try to create an implementation of Text that will capitalize all letters in the text:
final class AllCapsText implements Text {
private final Text origin;
public AllCapsText(final Text text) {
this.origin = text;
}
@Override
public String read() {
return this.origin.read().toUpperCase(Locale.ENGLISH);
}
}
How about a Text that trims the input:
final class TrimmedText implements Text {
private final Text origin;
public TrimmedText(final Text text) {
this.origin = text;
}
@Override
public String read() {
return this.origin.read().trim();
}
}
I can go on and on with these decorators. I can create many of them, suitable for their own individual use cases. But let’s see how they all can play together. Let’s say I want to read the text from the file, capitalize it, trim it, and remove all unprintable characters. And I want to be declarative. Here is what I do:
final Text text = new AllCapsText(
new TrimmedText(
new PrintableText(
new TextInFile(new File("/tmp/a.txt"))
)
)
);
String content = text.read();
First, I create an instance of Text, composing multiple decorators into a single object. I declaratively define the behavior of text without actually executing anything. Until method read() is called, the file is not touched and the processing of the text is not started. The object text is just a composition of decorators, not an executable procedure. Check out this article about declarative and imperative styles of programming: Utility Classes Have Nothing to Do With Functional Programming.
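That laziness is easy to verify. Here is a small diagnostic decorator of my own (CountedText is not from the article; the one-method Text interface is repeated only to keep the snippet self-contained) that counts how many times read() is actually invoked:

```java
import java.util.concurrent.atomic.AtomicInteger;

interface Text {
    String read();
}

// A decorator that counts invocations of read() on the wrapped Text.
final class CountedText implements Text {
    private final Text origin;
    private final AtomicInteger reads;

    CountedText(final Text text, final AtomicInteger counter) {
        this.origin = text;
        this.reads = counter;
    }

    @Override
    public String read() {
        this.reads.incrementAndGet();
        return this.origin.read();
    }
}
```

Composing a CountedText into a chain leaves the counter at zero; it becomes one only after the first call to read(). The composed object really is a description of work, not the work itself.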
This design is much more flexible and reusable than a more traditional one, where the Text object is smart enough to perform all said operations. Class String from Java is a good example of such bad design. It has more than 20 utility methods that should have been provided as decorators instead: trim(), toUpperCase(), substring(), split(), and many others. When I want to trim my string, uppercase it, and then split it into pieces, here is what my code looks like:
final String txt = "hello, world!";
final String[] parts = txt.trim().toUpperCase().split(" ");
This is imperative and procedural programming. Composable decorators, on the other hand, would make this code object-oriented and declarative. Something like this would be great to have in Java instead (pseudo-code):
final String[] parts = new String.Split(
new String.UpperCased(
new String.Trimmed("hello, world!")
)
);
To conclude, I recommend you think twice every time you add a new utility method to the interface/class. Try to avoid utility methods as much as possible, and use decorators instead. An ideal interface should contain only methods that you absolutely cannot remove. Everything else should be done through composable decorators.
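For what it’s worth, the pseudo-code above can be approximated in today’s Java. The classes below (Trimmed, UpperCased, Split) are my own sketch, not an existing API, built on the same one-method Text interface used throughout the article:

```java
import java.util.Locale;

interface Text {
    String read();
}

// Decorator: trims the wrapped text.
final class Trimmed implements Text {
    private final Text origin;
    Trimmed(final Text text) {
        this.origin = text;
    }
    @Override
    public String read() {
        return this.origin.read().trim();
    }
}

// Decorator: upper-cases the wrapped text.
final class UpperCased implements Text {
    private final Text origin;
    UpperCased(final Text text) {
        this.origin = text;
    }
    @Override
    public String read() {
        return this.origin.read().toUpperCase(Locale.ENGLISH);
    }
}

// Terminal object: splits the wrapped text into parts.
final class Split {
    private final Text origin;
    private final String regex;
    Split(final Text text, final String rgx) {
        this.origin = text;
        this.regex = rgx;
    }
    String[] parts() {
        return this.origin.read().split(this.regex);
    }
}
```

Since Text has a single method, a lambda can stand in for the source: new Split(new UpperCased(new Trimmed(() -> "  hello, world!  ")), " ").parts() yields the trimmed, upper-cased words.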
I’m often accused of being against functional programming because I call utility classes an anti-pattern. That’s absolutely wrong! Well, I do consider them a terrible anti-pattern, but they have nothing to do with functional programming. I believe there are two basic reasons why. First, functional programming is declarative, while utility class methods are imperative. Second, functional programming is based on lambda calculus, where a function can be assigned to a variable. Utility class methods are not functions in this sense. I’ll decode these statements in a minute.
In Java, there are basically two valid alternatives to these ugly utility classes aggressively promoted by Guava, Apache Commons, and others. The first one is the use of traditional classes, and the second one is Java 8 lambdas. Now let’s see why utility classes are not even close to functional programming and where this misconception is coming from.

Here is a typical example of a utility class Math from Java 1.0:
public class Math {
public static double abs(double a);
// a few dozens of other methods of the same style
}
Here is how you would use it when you want to calculate an absolute value of a floating point number:
double x = Math.abs(3.1415926d);
What’s wrong with it? We need a function, and we get it from class Math. The class has many useful functions inside it that can be used for many typical mathematical operations, like calculating maximum, minimum, sine, cosine, etc. It is a very popular concept; just look at any commercial or open source product. Utility classes like this have been used everywhere since Java was invented (this Math class was introduced in Java’s first version). Well, technically there is nothing wrong. The code will work. But it is not object-oriented programming. Instead, it is imperative and procedural. Do we care? Well, it’s up to you to decide. Let’s see what the difference is.
There are basically two different approaches: declarative and imperative.
Imperative programming is focused on describing how a program operates in terms of statements that change a program state. We just saw an example of imperative programming above. Here is another (this is pure imperative/procedural programming that has nothing to do with OOP):
public class MyMath {
public double f(double a, double b) {
double max = Math.max(a, b);
double x = Math.abs(max);
return x;
}
}
Declarative programming focuses on what the program should accomplish without prescribing how to do it in terms of sequences of actions to be taken. This is how the same code would look in Lisp, a functional programming language:
(defun f (a b) (abs (max a b)))
What’s the catch? Just a difference in syntax? Not really.
There are many definitions of the difference between imperative and declarative styles, but I will try to give my own. There are basically three roles interacting in the scenario with this f function/method: a buyer, a packager of the result, and a consumer of the result. Let’s say I call this function like this:
public void foo() {
double x = this.calc(5, -7);
System.out.println("max+abs equals to " + x);
}
private double calc(double a, double b) {
double x = new MyMath().f(a, b);
return x;
}
Here, method calc() is a buyer, method MyMath.f() is a packager of the result, and method foo() is a consumer. No matter which programming style is used, there are always the same three participants in the process: the buyer, the packager, and the consumer.
Imagine you’re a buyer and want to purchase a gift for your (girl|boy)friend. The first option is to visit a shop, pay $50, let them package that perfume for you, and then deliver it to the friend (and get a kiss in return). This is an imperative style.
The second option is to visit a shop, pay $50, and get a gift card. You then present this card to the friend (and get a kiss in return). When he or she decides to convert it to perfume, he or she will visit the shop and get it. This is a declarative style.
See the difference?
In the first case, which is imperative, you force the packager (a beauty shop) to find that perfume in stock, package it, and present it to you as a ready-to-be-used product. In the second scenario, which is declarative, you’re just getting a promise from the shop that eventually, when it’s necessary, the staff will find the perfume in stock, package it, and provide it to those who need it. If your friend never visits the shop with that gift card, the perfume will remain in stock.
Moreover, your friend can use that gift card as a product itself, never visiting the shop. He or she may instead present it to somebody else as a gift or just exchange it for another card or product. The gift card itself becomes a product!
So the difference is what the consumer is getting—either a product ready to be used (imperative) or a voucher for the product, which can later be converted into a real product (declarative).
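In Java terms, the “voucher” maps naturally onto java.util.function.Supplier. The contrast below is my illustration of the metaphor, not code from the article:

```java
import java.util.function.Supplier;

public final class Gifts {
    // Imperative: the result is computed here and now,
    // like the perfume packaged at the counter.
    static double product(final double a, final double b) {
        return Math.abs(Math.max(a, b));
    }

    // Declarative: the caller receives a promise; nothing is
    // computed until somebody actually calls get(), like a
    // gift card redeemed later.
    static Supplier<Double> voucher(final double a, final double b) {
        return () -> Math.abs(Math.max(a, b));
    }

    public static void main(final String[] args) {
        final Supplier<Double> card = voucher(5.0, -7.0);
        // The shop has not been visited yet; redeem only when needed:
        System.out.println("max+abs equals to " + card.get());
    }
}
```

The Supplier itself can be passed around, stored, or handed to somebody else without ever being redeemed, just like the gift card.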
Utility classes, like Math from JDK or StringUtils from Apache Commons, return products ready to be used immediately, while functions in Lisp and other functional languages return “vouchers.” For example, if you call the max function in Lisp, the actual maximum between two numbers will only be calculated when you actually start using it:
(let ((x (max 1 5)))
(print "X equals to " x))
Until this print actually starts to output characters to the screen, the function max won’t be called. This x is a “voucher” returned to you when you attempted to “buy” a maximum between 1 and 5.
Note, however, that nesting Java static functions one into another doesn’t make them declarative. The code is still imperative, because its execution delivers the result here and now:
public class MyMath {
public double f(double a, double b) {
return Math.abs(Math.max(a, b));
}
}
“Okay,” you may say, “I got it, but why is declarative style better than imperative? What’s the big deal?” I’m getting to it. Let me first show the difference between functions in functional programming and static methods in OOP. As mentioned above, this is the second big difference between utility classes and functional programming.
In any functional programming language, you can do this:
(defun foo (x) (x 5))
Then, later, you can define another function and pass it into foo as that x:
(defun bar (x) (+ x 1)) ;; defining function bar
(print (foo bar)) ;; passing bar as an argument to foo
Static methods in Java are not functions in terms of functional programming. You can’t do anything like this with a static method. Before Java 8 introduced lambdas and method references, you couldn’t pass a static method as an argument to another method at all. Basically, static methods are procedures or, simply put, Java statements grouped under a unique name. The only way to access them is to call a procedure and pass all necessary arguments to it. The procedure will calculate something and return a result that is immediately ready for usage.
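To be fair, Java 8 method references narrowed this gap: a static method can now be wrapped into a function object and passed around, much like bar in the Lisp snippet above. A quick sketch of mine:

```java
import java.util.function.DoubleUnaryOperator;

public final class Lambdas {
    // Equivalent of (defun foo (x) (x 5)): applies a given function to 5.
    static double foo(final DoubleUnaryOperator fun) {
        return fun.applyAsDouble(5.0);
    }

    public static void main(final String[] args) {
        // The static method Math.abs is passed as an argument.
        System.out.println(Lambdas.foo(Math::abs));
        // A lambda plays the role of bar: (defun bar (x) (+ x 1)).
        System.out.println(Lambdas.foo(x -> x + 1.0));
    }
}
```

Note that what is passed is still a function object wrapping the static method, not the method itself, so the author’s broader point about static methods stands.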
And now we’re getting to the final question I can hear you asking: “Okay, utility classes are not functional programming, but they look like functional programming, they work very fast, and they are very easy to use. Why not use them? Why aim for perfection when 20 years of Java history proves that utility classes are the main instrument of each Java developer?”
Besides OOP fundamentalism, which I’m very often accused of, there are a few very practical reasons (BTW, I am an OOP fundamentalist):
Testability. Calls to static methods in utility classes are hard-coded dependencies that can never be broken for testing purposes. If your class is calling FileUtils.readFile(), I will never be able to test it without using a real file on disk.
Efficiency. Utility classes, due to their imperative nature, are much less efficient than their declarative alternatives. They simply do all calculations right here and now, taking processor resources even when it’s not yet necessary. Instead of returning a promise to break down a string into chunks, StringUtils.split() breaks it down right now. And it breaks it down into all possible chunks, even if only the first one is required by the “buyer.”
Readability. Utility classes tend to be huge (try to read the source code of StringUtils or FileUtils from Apache Commons). The entire idea of separation of concerns, which makes OOP so beautiful, is absent in utility classes. They just put all possible procedures into one huge .java file, which becomes absolutely unmaintainable when it surpasses a dozen static methods.
To conclude, let me reiterate: Utility classes have nothing to do with functional programming. They are simply bags of static methods, which are imperative procedures. Try to stay as far as possible away from them and use solid, cohesive objects no matter how many of them you have to declare and how small they are.
StringUtils.split()breaks it down right now. And it breaks it down into all possible chunks, even if only the first one is required by the “buyer.”Readability. Utility classes tend to be huge (try to read the source code of
StringUtilsorFileUtilsfrom Apache Commons). The entire idea of separation of concerns, which makes OOP so beautiful, is absent in utility classes. They just put all possible procedures into one huge.javafile, which becomes absolutely unmaintainable when it surpasses a dozen static methods.
To conclude, let me reiterate: Utility classes have nothing to do with functional programming. They are simply bags of static methods, which are imperative procedures. Try to stay as far as possible away from them and use solid, cohesive objects no matter how many of them you have to declare and how small they are.
"/>
https://www.yegor256.com/2015/02/20/utility-classes-vs-functional-programming.html
Utility Classes Have Nothing to Do With Functional Programming
- Yegor Bugayenko
- comments
I was recently accused of being against functional programming because I call utility classes an anti-pattern. That’s absolutely wrong! Well, I do consider them a terrible anti-pattern, but they have nothing to do with functional programming. I believe there are two basic reasons why. First, functional programming is declarative, while utility class methods are imperative. Second, functional programming is based on lambda calculus, where a function can be assigned to a variable. Utility class methods are not functions in this sense. I’ll decode these statements in a minute.
In Java, there are basically two valid alternatives to these ugly utility classes aggressively promoted by Guava, Apache Commons, and others. The first one is the use of traditional classes, and the second one is Java 8 lambda. Now let’s see why utility classes are not even close to functional programming and where this misconception is coming from.
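As a sketch of the first alternative (a traditional class), here is a hypothetical Max object that encapsulates the same computation a static Math.max() call would perform. The class and method names are my own illustration, not from any library:

```java
// A hypothetical object-oriented alternative to a static Math.max():
// the comparison is encapsulated in a small, cohesive object.
public class Max {
    private final double left;
    private final double right;

    public Max(double left, double right) {
        this.left = left;
        this.right = right;
    }

    // The calculation happens only when the value is requested.
    public double value() {
        return this.left > this.right ? this.left : this.right;
    }

    public static void main(String[] args) {
        System.out.println(new Max(1.0, 5.0).value()); // prints 5.0
    }
}
```

The object can be passed around, stored, or decorated before anybody asks for its value, which is exactly what a static method cannot offer.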

Here is a typical example of a utility class Math from Java 1.0:
public class Math {
  public static double abs(double a);
  // a few dozen other methods of the same style
}

Here is how you would use it to calculate the absolute value of a floating-point number:

double x = Math.abs(3.1415926d);

What’s wrong with it? We need a function, and we get it from class Math. The class has many useful functions inside it that can be used for many typical mathematical operations, like calculating maximum, minimum, sine, cosine, etc. It is a very popular concept; just look at any commercial or open source product. These utility classes have been used everywhere since Java was invented (this Math class was introduced in Java’s first version). Well, technically there is nothing wrong. The code will work. But it is not object-oriented programming. Instead, it is imperative and procedural. Do we care? Well, it’s up to you to decide. Let’s see what the difference is.
There are basically two different approaches: declarative and imperative.
Imperative programming is focused on describing how a program operates in terms of statements that change a program state. We just saw an example of imperative programming above. Here is another (this is pure imperative/procedural programming that has nothing to do with OOP):
public class MyMath {
  public double f(double a, double b) {
    double max = Math.max(a, b);
    double x = Math.abs(max);
    return x;
  }
}

Declarative programming focuses on what the program should accomplish without prescribing how to do it in terms of sequences of actions to be taken. This is how the same code would look in Lisp, a functional programming language:
(defun f (a b) (abs (max a b)))

What’s the catch? Just a difference in syntax? Not really.
There are many definitions of the difference between imperative and declarative styles, but I will try to give my own. There are basically three roles interacting in the scenario with this f function/method: a buyer, a packager of the result, and a consumer of the result. Let’s say I call this function like this:
public void foo() {
  double x = this.calc(5, -7);
  System.out.println("max+abs equals to " + x);
}

private double calc(double a, double b) {
  double x = new MyMath().f(a, b);
  return x;
}

Here, method calc() is a buyer, method MyMath.f() is a packager of the result, and method foo() is a consumer. No matter which programming style is used, there are always these three guys participating in the process: the buyer, the packager, and the consumer.
Imagine you’re a buyer and want to purchase a gift for your (girl|boy)friend. The first option is to visit a shop, pay $50, let them package that perfume for you, and then deliver it to the friend (and get a kiss in return). This is an imperative style.
The second option is to visit a shop, pay $50, and get a gift card. You then present this card to the friend (and get a kiss in return). When he or she decides to convert it to perfume, he or she will visit the shop and get it. This is a declarative style.
See the difference?
In the first case, which is imperative, you force the packager (a beauty shop) to find that perfume in stock, package it, and present it to you as a ready-to-be-used product. In the second scenario, which is declarative, you’re just getting a promise from the shop that eventually, when it’s necessary, the staff will find the perfume in stock, package it, and provide it to those who need it. If your friend never visits the shop with that gift card, the perfume will remain in stock.
Moreover, your friend can use that gift card as a product itself, never visiting the shop. He or she may instead present it to somebody else as a gift or just exchange it for another card or product. The gift card itself becomes a product!
So the difference is what the consumer is getting—either a product ready to be used (imperative) or a voucher for the product, which can later be converted into a real product (declarative).
Utility classes, like Math from the JDK or StringUtils from Apache Commons, return products ready to be used immediately, while functions in lazily evaluated functional languages return “vouchers.” (Lisp actually evaluates eagerly by default, so treat the snippet below as a sketch of lazy evaluation rather than literal Common Lisp.) For example, if you call the max function in such a language, the actual maximum of the two numbers will only be calculated when you actually start using it:

(let ((x (max 1 5)))
  (print "X equals to " x))

Until this print actually starts to output characters to the screen, the function max won’t be called. This x is a “voucher” returned to you when you attempted to “buy” the maximum of 1 and 5.
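The same “voucher” idea can be imitated in Java with a Supplier, which postpones the calculation until get() is called. A minimal sketch (the class name Voucher is mine):

```java
import java.util.function.Supplier;

public class Voucher {
    public static void main(String[] args) {
        // Nothing is calculated here; we only receive a "voucher".
        Supplier<Integer> x = () -> Math.max(1, 5);
        // The maximum is computed only now, at the moment of use.
        System.out.println("X equals to " + x.get()); // prints "X equals to 5"
    }
}
```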
Note, however, that nesting Java static functions one into another doesn’t make them declarative. The code is still imperative, because its execution delivers the result here and now:
public class MyMath {
  public double f(double a, double b) {
    return Math.abs(Math.max(a, b));
  }
}

“Okay,” you may say, “I got it, but why is declarative style better than imperative? What’s the big deal?” I’m getting to it. Let me first show the difference between functions in functional programming and static methods in OOP. As mentioned above, this is the second big difference between utility classes and functional programming.
In any functional programming language, you can do this:
(defun foo (x) (funcall x 5))

Then, later, you can pass another function as that x:

(defun bar (x) (+ x 1)) ;; defining function bar
(print (foo #'bar))     ;; passing bar as an argument to foo

Static methods in Java are not functions in terms of functional programming. You can’t do anything like this with a static method. You can’t pass a static method as an argument to another method (a Java 8 method reference such as Math::abs wraps the method into a functional-interface object rather than passing the method itself). Basically, static methods are procedures or, simply put, Java statements grouped under a unique name. The only way to access them is to call the procedure and pass all the necessary arguments to it. The procedure will calculate something and return a result that is immediately ready for use.
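Since Java 8, a lambda can stand in for such a first-class function. A hedged sketch of the same foo/bar pair (class and method names are mine):

```java
import java.util.function.IntUnaryOperator;

public class FirstClass {
    // foo receives a function and applies it to 5,
    // mirroring (defun foo (x) (funcall x 5)).
    static int foo(IntUnaryOperator x) {
        return x.applyAsInt(5);
    }

    public static void main(String[] args) {
        IntUnaryOperator bar = n -> n + 1; // like (defun bar (x) (+ x 1))
        System.out.println(foo(bar)); // prints 6
    }
}
```

What gets passed here is an object implementing a functional interface, not the method itself, which is the distinction the article draws.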
And now we’re getting to the final question I can hear you asking: “Okay, utility classes are not functional programming, but they look like functional programming, they work very fast, and they are very easy to use. Why not use them? Why aim for perfection when 20 years of Java history proves that utility classes are the main instrument of each Java developer?”
Besides OOP fundamentalism, which I’m very often accused of, there are a few very practical reasons (BTW, I am an OOP fundamentalist):
- Testability. Calls to static methods in utility classes are hard-coded dependencies that can never be broken for testing purposes. If your class is calling FileUtils.readFile(), I will never be able to test it without using a real file on disk.
- Efficiency. Utility classes, due to their imperative nature, are much less efficient than their declarative alternatives. They simply do all calculations right here and now, taking processor resources even when it’s not yet necessary. Instead of returning a promise to break a string into chunks, StringUtils.split() breaks it down right now. And it breaks it down into all possible chunks, even if only the first one is required by the “buyer.”
- Readability. Utility classes tend to be huge (try to read the source code of StringUtils or FileUtils from Apache Commons). The entire idea of separation of concerns, which makes OOP so beautiful, is absent in utility classes. They just put all possible procedures into one huge .java file, which becomes absolutely unmaintainable once it surpasses a dozen static methods.
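To illustrate the testability point, here is a hypothetical seam: instead of calling a utility method directly, the class depends on a tiny interface that a test can fake. All names here are mine, not from any library:

```java
// A hypothetical seam that a utility-class call cannot provide:
// the dependency on the file system becomes replaceable in tests.
interface Text {
    String content();
}

class FakeText implements Text {
    @Override
    public String content() {
        return "fake data"; // no disk access needed in a unit test
    }
}

public class Report {
    private final Text source;

    public Report(Text source) {
        this.source = source;
    }

    public String render() {
        return "Report: " + this.source.content();
    }

    public static void main(String[] args) {
        System.out.println(new Report(new FakeText()).render());
        // prints "Report: fake data"
    }
}
```

A production implementation of Text would read a real file; the unit test never touches the disk.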
To conclude, let me reiterate: Utility classes have nothing to do with functional programming. They are simply bags of static methods, which are imperative procedures. Try to stay as far as possible away from them and use solid, cohesive objects no matter how many of them you have to declare and how small they are.
https://www.yegor256.com/2015/01/21/if-then-throw-else.html
If. Then. Throw. Else. WTF?
- Yegor Bugayenko
- comments
This is the code I could never understand:
if (x < 0) {
  throw new Exception("X can't be negative");
} else {
  System.out.println("X is positive or zero");
}

I have been trying to find a proper metaphor to explain its incorrectness. Today I finally found it.
If-then-else is a forking mechanism of procedural programming. The CPU either goes to the left and then does something or goes to the right and does something else. Imagine yourself driving a car and seeing this sign:

It looks logical, doesn’t it? You can go in the left lane if you’re not driving a truck. Otherwise you should go in the right lane. Both lanes meet up in a while. No matter which one you choose, you will end up on the same road. This is what this code block does:
if (x < 0) {
  System.out.println("X is negative");
} else {
  System.out.println("X is positive or zero");
}

Now, try to imagine this sign:

It looks very strange to me, and you will never see this sign anywhere simply because a dead end means an end, a full stop, a finish. What is the point of drawing a lane after the dead end sign? There is no point.
This is how a proper sign would look:

This is how a proper code block would look:
if (x < 0) {
  throw new Exception("X can't be negative");
}
System.out.println("X is positive or zero");

The same is true for loops. This is wrong:
for (int x : numbers) {
  if (x < 0) {
    continue;
  } else {
    System.out.println("found positive number");
  }
}

While this is right:
for (int x : numbers) {
  if (x < 0) {
    continue;
  }
  System.out.println("found positive number");
}

There is no road after the dead end! If you draw it, your code looks like this very funny snippet I found a few years ago while reviewing sources written by a very well-paid developer in one very serious company:
if (x < 0) {
  throw new Exception("X is negative");
  System.exit(1);
}

Don’t do this.
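The same “no road after the dead end” rule yields the guard-clause pattern: validate first, exit immediately, and keep the happy path flat. A minimal sketch (class and method names are mine):

```java
public class Guard {
    // Guard clauses: each dead end exits immediately,
    // so the main road stays unindented and readable.
    static String describe(int x) {
        if (x < 0) {
            throw new IllegalArgumentException("X can't be negative");
        }
        if (x == 0) {
            return "zero";
        }
        return "positive";
    }

    public static void main(String[] args) {
        System.out.println(describe(5)); // prints "positive"
    }
}
```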
https://www.yegor256.com/2015/01/12/compound-name-is-code-smell.html
A Compound Name Is a Code Smell
- Yegor Bugayenko
- comments
- Translated:
- Chinese
- add yours!
Do you name variables like textLength, table_name, or current-user-email? All three are compound names that consist of more than one word. Even though they look more descriptive than name, length, or email, I would strongly recommend avoiding them. I believe a variable name that is more complex than a noun is a code smell. Why? Because we usually give a variable a compound name when its scope is so big and complex that a simple noun would sound ambiguous. And a big, complex scope is an obvious code smell.

The scope of a variable is the place where it is visible, like a method, for example. Look at this Ruby class:
class CSV
  def initialize(csvFileName)
    @fileName = csvFileName
  end
  def readRecords()
    File.readlines(@fileName).map do |csvLine|
      csvLine.split(',')
    end
  end
end
The visible scope of variable csvFileName is method initialize(), which is a constructor of the class CSV. Why does it need a compound name that consists of three words? Isn’t it already clear that a single-argument constructor of class CSV expects the name of a file with comma-separated values? I would rename it to file.
Next, the scope of @fileName is the entire CSV class. Renaming a single variable in the class to just @file won’t introduce any confusion. It’s still clear what file we’re dealing with. The same situation exists with the csvLine variable. It is clear that we’re dealing with CSV lines here. The csv prefix is just a redundancy. Here is how I would refactor the class:
class CSV
  def initialize(file)
    @file = file
  end
  def records()
    File.readlines(@file).map do |line|
      line.split(',')
    end
  end
end
Now it looks clear and concise.
If you can’t perform such a refactoring, it means your scope is too big and/or too complex. An ideal method should deal with up to five variables, and an ideal class should encapsulate up to five properties.
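The same discipline carries over to Java. Below is a hypothetical Java counterpart of the refactored Ruby class (the class name Csv and the use of java.nio are my choices, for illustration): because every scope here is tiny, the single-word names file and line remain unambiguous.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical Java counterpart of the refactored Ruby CSV class.
// Each scope is small, so single-word names are unambiguous.
class Csv {
    private final Path file;

    Csv(Path file) {
        this.file = file;
    }

    // Returns one String[] of fields per line of the file.
    List<String[]> records() throws IOException {
        return Files.readAllLines(this.file).stream()
            .map(line -> line.split(","))
            .collect(Collectors.toList());
    }
}
```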
If we have five variables, can’t we find five nouns to name them?
Adam and Eve didn’t have second names. They were unique in Eden, as were many other characters in the Old Testament. Second and middle names were invented later in order to resolve ambiguity. To keep your methods and classes clean and solid, and to prevent ambiguity, try to give your variables and methods unique single-word names, just like Adam and Eve were named by you know who :)
PS. Also, redundant variables are evil as well.
https://www.yegor256.com/2014/12/22/immutable-objects-not-dumb.html
Immutable Objects Are Not Dumb
- Yegor Bugayenko
After a few recent posts about immutability, including Objects Should Be Immutable and How an Immutable Object Can Have State and Behavior?, I was surprised by the number of comments saying that I badly misunderstood the idea. Most of those comments stated that an immutable object must always behave the same way—that is what immutability is about. What kind of immutability is it, if a method returns different results each time we call it? This is not how well-known immutable classes behave. Take, for example, String, BigInteger, Locale, URI, URL, Inet4Address, UUID, or wrapper classes for primitives, like Double and Integer. Other comments argued against the very definition of an immutable object as a representative of a mutable real-world entity. How could an immutable object represent a mutable entity? Huh?

I’m very surprised. This post is going to clarify the definition of an immutable object. First, here is a quick answer. How can an immutable object represent a mutable entity? Look at an immutable class, File, and its methods, for example length() and delete(). The class is immutable, according to Oracle documentation, and its methods may return different values each time we call them. An object of class File, being perfectly immutable, represents a mutable real-world entity, a file on disk.
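This is easy to observe with a few lines of standard Java. In the sketch below, the File object never changes (its path is fixed at construction), yet length() answers differently once the file on disk is modified:

```java
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;

public class Demo {
    public static void main(String[] args) throws IOException {
        // The File object is immutable: its path never changes.
        File file = File.createTempFile("demo", ".txt");
        long before = file.length(); // the file is empty: 0
        try (FileWriter out = new FileWriter(file)) {
            out.write("hello"); // mutate the real-world entity on disk
        }
        long after = file.length(); // same object, different answer: 5
        System.out.println(before + " -> " + after);
        file.delete();
    }
}
```

Running it prints 0 -> 5: the object stayed the same; the entity it represents did not.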

In this post, I said that “an object is immutable if its state can’t be modified after it is created.” This definition is not mine; it’s taken from Java Concurrency in Practice by Goetz et al., Section 3.4 (by the way, I highly recommend you read it). Now look at this class (I’m using jcabi-http to read and write over HTTP):
@Immutable
class Page {
  private final URI uri;
  Page(URI addr) {
    this.uri = addr;
  }
  public String load() {
    return new JdkRequest(this.uri)
      .fetch().body();
  }
  public void save(String content) {
    new JdkRequest(this.uri)
      .method("PUT")
      .body().set(content).back()
      .fetch();
  }
}
What is the “state” in this class? That’s right, this.uri is the state. It uniquely identifies every object of this class, and it is not modifiable. Thus, the class makes only immutable objects. And each object represents a mutable entity of the real world, a web page with a URI.
There is no contradiction in this situation. The class is perfectly immutable, while the web page it represents is mutable.
Why do most programmers I have talked to believe that if an underlying entity is mutable, an object is mutable too? I think the answer is simple—they think that objects are data structures with methods. That’s why, from this point of view, an immutable object is a data structure that never changes.
This is where the fallacy is coming from—an object is not a data structure. It is a living organism representing a real-world entity inside the object’s living environment (a computer program). It does encapsulate some data, which helps to locate the entity in the real world. The encapsulated data is the coordinates of the entity being represented. In the case of String or URL, the coordinates are the same as the entity itself, but this is just an isolated incident, not a generic rule.
An immutable object is not a data structure that doesn’t change, even though String, BigInteger, and URL look like one. An object is immutable if and only if it doesn’t change the coordinates of the real-world entity it represents. In the Page class above, this means that an object of the class, once instantiated, will never change this.uri. It will always point to the same web page, no matter what.
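The Page class needs the network, but the same point can be made with a self-contained sketch (the class name Now and the use of java.time.Clock are my choices, not the author's): the object's coordinates, the clock it encapsulates, never change, while the answers it gives may change on every call.

```java
import java.time.Clock;

// Hypothetical sketch: an immutable object over a mutable entity.
// The encapsulated clock (the "coordinates") never changes,
// but millis() may return a different value on every call.
class Now {
    private final Clock clock;

    Now(Clock clock) {
        this.clock = clock;
    }

    long millis() {
        return this.clock.millis();
    }
}
```

With Clock.systemUTC() the object behaves differently each time; with a fixed clock it is fully deterministic. Either way, the object itself never mutates.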
And the object doesn’t guarantee anything about the behavior of that web page. The page is a dynamic creature of a real world, living its own life. Our object can’t promise anything about the page. The only thing it promises is that it will always stay loyal to that page—it will never forget or change its coordinates.
Conceptually speaking, immutability means loyalty, that’s all.
If you like this article, you will definitely like these very relevant posts too:
Objects Should Be Immutable
The article gives arguments about why classes/objects in object-oriented programming have to be immutable, i.e. never modify their encapsulated state
How an Immutable Object Can Have State and Behavior?
Object state and behavior are two very different things, and confusing the two often leads to incorrect design.
Gradients of Immutability
There are a few levels and forms of immutability in object-oriented programming, all of which can be used when they seem appropriate.
https://www.yegor256.com/2014/12/15/how-much-your-objects-encapsulate.html
How Much Your Objects Encapsulate?
- Yegor Bugayenko
Which line do you like more, the first or the second:
new HTTP("http://www.google.com").read();
new HTTP().read("http://www.google.com");
What is the difference? The first class HTTP encapsulates a URL, while the second one expects it as an argument of method read(). Technically, both objects do exactly the same thing: they read the content of the Google home page. Which one is the right design? Usually I hate to say this, but in this case I have to—it depends.

As we discussed before, a good object is a representative of a real-life entity. Such an entity exists outside of the object’s living environment. The object knows how to access it and how to communicate with it.
What is that real-life entity in the example above? Each class gives its own answer. And the answer is given by the list of arguments its constructors accept. The first class accepts a single URL as an argument of its constructor. This tells us that the object of this class, after being constructed, will represent a web page. The second class accepts no arguments, which tells us that the object of it will represent… the Universe.
I think this principle is applicable to all classes in object-oriented programming—in order to understand what real-life entity an object represents, look at its constructor. All arguments passed into the constructor and encapsulated by the object identify a real-life entity accessed and managed by the object.
Of course, I’m talking about good objects, which are immutable and don’t have setters and getters.
Pay attention that I’m talking about arguments encapsulated by the object. The following class doesn’t represent the Universe, even though it does have a no-arguments constructor:
class Time {
  private final long msec;
  public Time() {
    this(System.currentTimeMillis());
  }
  public Time(long time) {
    this.msec = time;
  }
}
This class has two constructors. One of them is the main one, and the other is supplementary. We’re interested in the main one, which implements the encapsulation of arguments.
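The primary/supplementary constructor split shown in the Time class is easy to reuse. A minimal sketch (the class name Interval and its default value are mine, for illustration): only the primary constructor touches the field; every supplementary constructor merely delegates with a default.

```java
// A minimal sketch of the primary/supplementary constructor pattern,
// mirroring the Time class above; the Interval name is illustrative.
class Interval {
    private final long msec;

    // Supplementary constructor: supplies a default, then delegates.
    Interval() {
        this(1000L);
    }

    // Primary constructor: the only place where state is encapsulated.
    Interval(long msec) {
        this.msec = msec;
    }

    long millis() {
        return this.msec;
    }
}
```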
Now, the question is which is better: to represent a web page or the Universe? It depends, but I think that in general, the smaller the real-life entity we represent, the more solid and cohesive design we give to the object.
On the other hand, sometimes we have to have an object that represents the Universe. For example, we may have this:
class HTTP {
  public String read(String url) {
    // read via HTTP and return
  }
  public boolean online() {
    // check whether we're online
  }
}
This is not an elegant design, but it demonstrates when it may be necessary to represent the entire Universe. An object of this HTTP class can read any web page from the entire web (it is almost as big as the Universe, isn’t it?), and it can check whether the entire web is accessible by it. Obviously, in this case, we don’t need it to encapsulate anything.
I believe that objects representing the Universe are not good objects, mostly because there is only one Universe; why do we need many representatives of it? :)
How an Immutable Object Can Have State and Behavior?
I often hear this argument against immutable objects: “Yes, they are useful when the state doesn’t change. However, in our case, we deal with frequently changing objects. We simply can’t afford to create a new document every time we just need to change its title.” Here is where I disagree: the title is not part of a document’s state if you need to change it frequently. Instead, it is the document’s behavior. A document can and must be immutable, if it is a good object, even when its title is changed frequently. Let me explain how.
Identity, State, and Behavior
Basically, there are three elements in every object: identity, state, and behavior. Identity is what distinguishes our document from other objects, state is what a document knows about itself (a.k.a. “encapsulated knowledge”), and behavior is what a document can do for us on request. For example, this is a mutable document:
class Document {
  private int id;
  private String title;
  Document(int id) {
    this.id = id;
  }
  public String getTitle() {
    return this.title;
  }
  public void setTitle(String text) {
    this.title = text;
  }
  @Override
  public String toString() {
    return String.format("doc #%d about '%s'", this.id, this.title);
  }
}
Let’s try to use this mutable object:
Document first = new Document(50);
first.setTitle("How to grill a sandwich");
Document second = new Document(50);
second.setTitle("How to grill a sandwich");
if (first.equals(second)) { // FALSE
  System.out.println(
    String.format("%s is equal to %s", first, second)
  );
}
Here, we’re creating two objects and then modifying their encapsulated states. Obviously, first.equals(second) will return false because the two objects have different identities, even though they encapsulate the same state.
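The false result comes straight from java.lang.Object: when a class doesn’t override equals(), the inherited implementation compares references, i.e. identities. A minimal sketch of that behavior; the class name is shortened here to avoid clashing with Document above:

```java
// Without an equals() override, java.lang.Object compares references:
// two distinct objects are never equal, whatever state they hold.
final class MutableDoc {
    private final int id;
    private String title;

    MutableDoc(int id) {
        this.id = id;
    }

    public void setTitle(String text) {
        this.title = text;
    }
}

class IdentityDemo {
    public static void main(String[] args) {
        MutableDoc first = new MutableDoc(50);
        first.setTitle("How to grill a sandwich");
        MutableDoc second = new MutableDoc(50);
        second.setTitle("How to grill a sandwich");
        System.out.println(first.equals(second)); // prints "false"
        System.out.println(first.equals(first));  // prints "true"
    }
}
```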
Method toString() exposes the document’s behavior—the document can convert itself to a string.
In order to modify a document’s title, we just call its setTitle() once again:
first.setTitle("How to cook pasta");
Simply put, we can reuse the object many times, modifying its internal state. It is fast and convenient, isn’t it? Fast, yes. Convenient, not really. Read on.
Immutable Objects Have No Identity
As I’ve mentioned before, immutability is one of the virtues of a good object, and a very important one. A good object is immutable, and good software contains only immutable objects. The main difference between immutable and mutable objects is that an immutable one doesn’t have an identity and its state never changes. Here is an immutable variant of the same document:
@Immutable
class Document {
  private final int id;
  private final String title;
  Document(int id, String text) {
    this.id = id;
    this.title = text;
  }
  public String title() {
    return this.title;
  }
  public Document title(String text) {
    return new Document(this.id, text);
  }
  @Override
  public boolean equals(Object doc) {
    return doc instanceof Document
      && Document.class.cast(doc).id == this.id
      && Document.class.cast(doc).title.equals(this.title);
  }
  @Override
  public String toString() {
    return String.format(
      "doc #%d about '%s'", this.id, this.title
    );
  }
}
This document is immutable, and its state (id and title) is its identity. Let’s see how we can use this immutable class (by the way, I’m using the @Immutable annotation from jcabi-aspects):
Document first = new Document(50, "How to grill a sandwich");
Document second = new Document(50, "How to grill a sandwich");
if (first.equals(second)) { // TRUE
  System.out.println(
    String.format("%s is equal to %s", first, second)
  );
}
We can’t modify a document any more. When we need to change the title, we have to create a new document:
Document first = new Document(50, "How to grill a sandwich");
first = first.title("How to cook pasta");
Every time we want to modify its encapsulated state, we have to create a new object with a new identity, because there is no identity apart from the state. The state is the identity. Look at the code of the equals() method above—it compares documents by their IDs and titles. The ID+title of a document is its identity!
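To see state-as-identity in action, here is a compact, compilable sketch of the same idea; I add hashCode() to honor Java’s equals()/hashCode() contract, which the listing above omits, and rename the class to keep it self-contained:

```java
// An immutable document whose state (id + title) is its identity.
final class ImmutableDoc {
    private final int id;
    private final String title;

    ImmutableDoc(int id, String text) {
        this.id = id;
        this.title = text;
    }

    // "Modifying" the title yields a brand-new object.
    public ImmutableDoc title(String text) {
        return new ImmutableDoc(this.id, text);
    }

    @Override
    public boolean equals(Object doc) {
        return doc instanceof ImmutableDoc
            && ((ImmutableDoc) doc).id == this.id
            && ((ImmutableDoc) doc).title.equals(this.title);
    }

    // Added here: objects that are equal must share a hash code.
    @Override
    public int hashCode() {
        return this.id * 31 + this.title.hashCode();
    }
}

class StateDemo {
    public static void main(String[] args) {
        ImmutableDoc first = new ImmutableDoc(50, "How to grill a sandwich");
        ImmutableDoc second = new ImmutableDoc(50, "How to grill a sandwich");
        System.out.println(first.equals(second)); // prints "true"
        System.out.println(first.equals(first.title("How to cook pasta"))); // prints "false"
    }
}
```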
What About Frequent Changes?
Now I’m getting to the question we started with: What about performance and convenience? We don’t want to change the entire document every time we have to modify its title. If the document is big enough, that would be a huge obligation. Moreover, if an immutable object encapsulates other immutable objects, we have to change the entire hierarchy when modifying even a single string in one of them.
The answer is simple. A document’s title should not be part of its state. Instead, the title should be its behavior. For example, consider this:
@Immutable
class Document {
  private final int id;
  Document(int id) {
    this.id = id;
  }
  public String title() {
    // Read the title from storage.
  }
  public void title(String text) {
    // Save the text to storage.
  }
  @Override
  public boolean equals(Object doc) {
    return doc instanceof Document
      && Document.class.cast(doc).id == this.id;
  }
  @Override
  public String toString() {
    return String.format("doc #%d about '%s'", this.id, this.title());
  }
}
Conceptually speaking, this document acts as a proxy of a real-life document that has a title stored somewhere—in a file, for example. This is what a good object should do—be a proxy of a real-life entity. The document exposes two features: reading the title and saving the title. Here is what its interface would look like:
@Immutable
interface Document {
  String title();
  void title(String text);
}
title() reads the title of the document and returns it as a String, and title(String) saves it back into the document. Imagine a real paper document with a title. You ask an object to read that title from the paper or to erase an existing one and write new text over it. This paper is the “copy” utilized in these methods.
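One concrete way to implement that interface is to keep the title in a plain file whose path is derived from the document’s id. This is a sketch under my own assumptions: the FileDocument name and the "doc-&lt;id&gt;.title" naming convention are not from the text.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

// A hypothetical file-backed Document: the title lives in a file on disk,
// and this immutable object merely reads and rewrites it on request.
final class FileDocument {
    private final int id;

    FileDocument(int id) {
        this.id = id;
    }

    public String title() {
        try {
            return Files.readString(this.path());
        } catch (IOException ex) {
            throw new UncheckedIOException(ex);
        }
    }

    public void title(String text) {
        try {
            Files.writeString(this.path(), text);
        } catch (IOException ex) {
            throw new UncheckedIOException(ex);
        }
    }

    // Assumed convention: document #50 keeps its title in "doc-50.title".
    private Path path() {
        return Path.of(String.format("doc-%d.title", this.id));
    }
}
```

Note that the object’s state is still only the id; the frequently changing title lives outside of the object, in the file.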
Now we can make frequent changes to the immutable document, and the document stays the same. It doesn’t stop being immutable, since its state (id) is not changed. It is the same document, even though we change its title, because the title is not a state of the document. It is something in the real world, outside of the document. The document is just a proxy between us and that “something.” Reading and writing the title are behaviors of the document, not its state.
Mutable Memory
The only question still unanswered is: what is that “copy,” and what happens if we need to keep the title of the document in memory?
Let’s look at it from an “object thinking” point of view. We have a document object, which is supposed to represent a real-life entity in an object-oriented world. If such an entity is a file, we can easily implement title() methods. If such an entity is an Amazon S3 object, we also implement title reading and writing methods easily, keeping the object immutable. If such an entity is an HTTP page, we have no issues in the implementation of title reading or writing, keeping the object immutable. We have no issues as long as a real-world document exists and has its own identity. Our title reading and writing methods will communicate with that real-world document and extract or update its title.
Problems arise when such an entity doesn’t exist in a real world. In that case, we need to create a mutable object property called title, read it via title(), and modify it via title(String). But an object is immutable, so we can’t have a mutable property in it—by definition! What do we do?
Think.
How could it be that our object doesn’t represent a real-world entity? Remember, the real world is everything around the living environment of an object. Is it possible that an object doesn’t represent anyone and acts on its own? No, it’s not possible. Every object is a representative of a real-world entity. So, who does it represent if we want to keep title inside it and we don’t have any file or HTTP page behind the object?

It represents computer memory.
The title of immutable document #50, “How to grill a sandwich,” is stored in the memory, taking up 23 bytes of space. The document should know where those bytes are stored, and it should be able to read them and replace them with something else. Those 23 bytes are the real-world entity that the object represents. The bytes have nothing to do with the state of the object. They are a mutable real-world entity, similar to a file, HTTP page, or an Amazon S3 object.
Unfortunately, Java (and many other modern languages) do not allow direct access to computer memory. This is how we would design our class if such direct access was possible:
@Immutable
class Document {
  private final int id;
  private final Memory memory;
  Document(int id) {
    this.id = id;
    this.memory = new Memory();
  }
  public String title() {
    return new String(this.memory.read());
  }
  public void title(String text) {
    this.memory.write(text.getBytes());
  }
}
That Memory class would be implemented natively by the JDK, and all other classes would be immutable. The class Memory would have direct access to the memory heap and would be responsible for malloc and free operations at the operating system level. Having such a class would allow us to make all Java classes immutable, including StringBuffer, ByteArrayOutputStream, etc.
The Memory class would explicitly emphasize the mission of an object in a software program, which is to be a data animator. An object is not holding data; it is animating it. The data exists somewhere, and it is anemic, static, motionless, stationary, etc. The data is dead while the object is alive. The role of an object is to make a piece of data alive, to animate it but not to become a piece of data. An object needs some knowledge in order to gain access to that dead piece of data. An object may need a database unique key, an HTTP address, a file name, or a memory address in order to find the data and animate it. But an object should never think of itself as data.
What Is the Practical Solution?
Unfortunately, we don’t have such a memory-representing class in Java, Ruby, JavaScript, Python, PHP, and many other high-level languages. It looks like language designers didn’t get the idea of alive objects vs. dead data, which is sad. We’re forced to mix data with object states using the same language constructs: object variables and properties. Maybe someday we’ll have that Memory class in Java and other languages, but until then, we have a few options.
Use C++. In C++ and similar low-level languages, it is possible to access memory directly and deal with in-memory data the same way we deal with in-file or in-HTTP data. In C++, we can create that Memory class and use it exactly the way we explained above.
Use Arrays. In Java, an array is a data structure with a unique property—it can be modified while being declared as final. You can use an array of bytes as a mutable data structure inside an immutable object. It’s a surrogate solution that conceptually resembles the Memory class but is much more primitive.
Avoid In-Memory Data. Try to avoid in-memory data as much as possible. In some domains, it is easy to do; for example, in web apps, file processing, I/O adapters, etc. However, in other domains, it is much easier said than done. For example, in games, data manipulation algorithms, and GUI, most of the objects animate in-memory data mostly because memory is the only resource they have. In that case, without the Memory class, you end up with mutable objects :( There is no workaround.
To summarize, don’t forget that an object is an animator of data. It is using its encapsulated knowledge in order to reach the data. No matter where the data is stored—in a file, in HTTP, or in memory—it is conceptually very different from an object state, even though they may look very similar.
A good object is an immutable animator of mutable data. Even though it is immutable and data is mutable, it is alive and data is dead in the scope of the object’s living environment.
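As a footnote to the “Use Arrays” option above, here is a minimal sketch of the trick: the array reference is final, so the object stays immutable in Java’s shallow sense, while the cell it points to plays the role of the missing Memory class. The TitleCell name is my own, and I use a String cell rather than raw bytes for brevity.

```java
// The "final array" surrogate for the Memory class: the reference never
// changes, but the single cell it points to is mutable in-memory data
// that the object animates.
final class TitleCell {
    private final String[] memory = new String[1];

    public String title() {
        return this.memory[0];
    }

    public void title(String text) {
        this.memory[0] = text;
    }
}

class CellDemo {
    public static void main(String[] args) {
        TitleCell cell = new TitleCell();
        cell.title("How to grill a sandwich");
        System.out.println(cell.title()); // prints "How to grill a sandwich"
        cell.title("How to cook pasta");
        System.out.println(cell.title()); // prints "How to cook pasta"
    }
}
```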
The article above, “How an Immutable Object Can Have State and Behavior?” by Yegor Bugayenko, was originally published at https://www.yegor256.com/2014/12/09/immutable-object-state-and-behavior.html
Avoid In-Memory Data. Try to avoid in-memory data as much as possible. In some domains, it is easy to do; for example, in web apps, file processing, I/O adapters, etc. However, in other domains, it is much easier said than done. For example, in games, data manipulation algorithms, and GUI, most of the objects animate in-memory data mostly because memory is the only resource they have. In that case, without the Memory class, you end up with mutable objects :( There is no workaround.
To summarize, don’t forget that an object is an animator of data. It is using its encapsulated knowledge in order to reach the data. No matter where the data is stored—in a file, in HTTP, or in memory—it is conceptually very different from an object state, even though they may look very similar.
A good object is an immutable animator of mutable data. Even though it is immutable and data is mutable, it is alive and data is dead in the scope of the object’s living environment.
I often hear this argument against immutable objects: “Yes, they are useful when the state doesn’t change. However, in our case, we deal with frequently changing objects. We simply can’t afford to create a new document every time we just need to change its title.” Here is where I disagree: object title is not a state of a document, if you need to change it frequently. Instead, it is a document’s behavior. A document can and must be immutable, if it is a good object, even when its title is changed frequently. Let me explain how.

Identity, State, and Behavior
Basically, there are three elements in every object: identity, state, and behavior. Identity is what distinguishes our document from other objects, state is what a document knows about itself (a.k.a. “encapsulated knowledge”), and behavior is what a document can do for us on request. For example, this is a mutable document:
class Document {
private int id;
private String title;
Document(int id) {
this.id = id;
}
public String getTitle() {
return this.title;
}
public void setTitle(String text) {
this.title = text;
}
@Override
public String toString() {
return String.format("doc #%d about '%s'", this.id, this.title);
}
}Let’s try to use this mutable object:
Document first = new Document(50);
first.setTitle("How to grill a sandwich");
Document second = new Document(50);
second.setTitle("How to grill a sandwich");
if (first.equals(second)) { // FALSE
System.out.println(
String.format("%s is equal to %s", first, second)
);
}Here, we’re creating two objects and then modifying their encapsulated states. Obviously, first.equals(second) will return false because the two objects have different identities, even though they encapsulate the same state.
Method toString() exposes the document’s behavior—the document can convert itself to a string.
In order to modify a document’s title, we just call its setTitle() once again:
first.setTitle("How to cook pasta");Simply put, we can reuse the object many times, modifying its internal state. It is fast and convenient, isn’t it? Fast, yes. Convenient, not really. Read on.
Immutable Objects Have No Identity
As I’ve mentioned before, immutability is one of the virtues of a good object, and a very important one. A good object is immutable, and good software contains only immutable objects. The main difference between immutable and mutable objects is that an immutable one doesn’t have an identity and its state never changes. Here is an immutable variant of the same document:
@Immutable
class Document {
private final int id;
private final String title;
Document(int id, String text) {
this.id = id;
this.title = text;
}
public String title() {
return this.title;
}
public Document title(String text) {
return new Document(this.id, text);
}
@Override
public boolean equals(Object doc) {
return doc instanceof Document
&& Document.class.cast(doc).id == this.id
&& Document.class.cast(doc).title.equals(this.title);
}
@Override
public String toString() {
return String.format(
"doc #%d about '%s'", this.id, this.title
);
}
}This document is immutable, and its state (id and title) is its identity. Let’s see how we can use this immutable class (by the way, I’m using the @Immutable annotation from jcabi-aspects):
Document first = new Document(50, "How to grill a sandwich");
Document second = new Document(50, "How to grill a sandwich");
if (first.equals(second)) { // TRUE
System.out.println(
String.format("%s is equal to %s", first, second)
);
}We can’t modify a document any more. When we need to change the title, we have to create a new document:
Document first = new Document(50, "How to grill a sandwich");
first = first.title("How to cook pasta");Every time we want to modify its encapsulated state, we end up with a new object, because there is no identity apart from the state: the state is the identity. Look at the code of the equals() method above; it compares documents by their IDs and titles. The ID plus the title of a document is its identity!
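To make “state is identity” concrete, here is a self-contained sketch of the immutable Document from above (condensed; the hashCode() override is my addition, since a class that overrides equals() should also override hashCode() so that hash-based collections keep working):

```java
final class Document {
    private final int id;
    private final String title;
    Document(int id, String text) {
        this.id = id;
        this.title = text;
    }
    public String title() {
        return this.title;
    }
    public Document title(String text) {
        // "modification" produces a new object with a new state/identity
        return new Document(this.id, text);
    }
    @Override
    public boolean equals(Object doc) {
        return doc instanceof Document
            && ((Document) doc).id == this.id
            && ((Document) doc).title.equals(this.title);
    }
    @Override
    public int hashCode() {
        return 31 * this.id + this.title.hashCode();
    }
    public static void main(String[] args) {
        Document first = new Document(50, "How to grill a sandwich");
        // same state means same identity:
        System.out.println(first.equals(new Document(50, "How to grill a sandwich"))); // prints "true"
        // a changed title is a different state, hence a different identity:
        System.out.println(first.equals(first.title("How to cook pasta"))); // prints "false"
    }
}
```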
What About Frequent Changes?
Now I’m getting to the question we started with: What about performance and convenience? We don’t want to re-create the entire document every time we have to modify its title. If the document is big enough, that would be a huge overhead. Moreover, if an immutable object encapsulates other immutable objects, we have to re-create the entire hierarchy when modifying even a single string in one of them.
The answer is simple. A document’s title should not be part of its state. Instead, the title should be its behavior. For example, consider this:
@Immutable
class Document {
private final int id;
Document(int id) {
this.id = id;
}
public String title() {
// read title from storage
}
public void title(String text) {
// save text to storage
}
@Override
public boolean equals(Object doc) {
return doc instanceof Document
&& Document.class.cast(doc).id == this.id;
}
@Override
public String toString() {
return String.format("doc #%d about '%s'", this.id, this.title());
}
}Conceptually speaking, this document is acting as a proxy of a real-life document that has a title stored somewhere—in a file, for example. This is what a good object should do—be a proxy of a real-life entity. The document exposes two features: reading the title and saving the title. Here is how its interface would look:
@Immutable
interface Document {
String title();
void title(String text);
}title() reads the title of the document and returns it as a String, and title(String) saves it back into the document. Imagine a real paper document with a title. You ask an object to read that title from the paper or to erase an existing one and write new text over it. This paper is the “copy” these methods work with.
Now we can make frequent changes to the immutable document, and the document stays the same. It doesn’t stop being immutable, since its state (id) is not changed. It is the same document, even though we change its title, because the title is not a state of the document. It is something in the real world, outside of the document. The document is just a proxy between us and that “something.” Reading and writing the title are behaviors of the document, not its state.
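As an illustration, here is a minimal sketch of such a proxy backed by a plain file, via java.nio.file (the FileDocument name and the choice of one file per document are my assumptions for this example, not something the article prescribes):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

interface Document {
    String title();
    void title(String text);
}

final class FileDocument implements Document {
    private final int id;
    private final Path file; // where the real-world "paper" lives
    FileDocument(int id, Path file) {
        this.id = id;
        this.file = file;
    }
    @Override
    public String title() {
        try {
            // read the title from the real-world entity
            return new String(Files.readAllBytes(this.file), StandardCharsets.UTF_8);
        } catch (IOException ex) {
            throw new UncheckedIOException(ex);
        }
    }
    @Override
    public void title(String text) {
        try {
            // overwrite the title in the real-world entity
            Files.write(this.file, text.getBytes(StandardCharsets.UTF_8));
        } catch (IOException ex) {
            throw new UncheckedIOException(ex);
        }
    }
    @Override
    public boolean equals(Object obj) {
        // identity is the id alone; the title is behavior, not state
        return obj instanceof FileDocument && ((FileDocument) obj).id == this.id;
    }
    @Override
    public int hashCode() {
        return this.id;
    }
    public static void main(String[] args) throws IOException {
        Path file = Files.createTempFile("doc-", ".txt");
        Document doc = new FileDocument(50, file);
        doc.title("How to grill a sandwich");
        System.out.println(doc.title()); // prints "How to grill a sandwich"
    }
}
```

The object itself never changes: its only state is id and file. The title lives in the file, and changing it does not touch the object.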
Mutable Memory
The only question still unanswered is: what is that “copy,” and what happens if we need to keep the title of the document in memory?
Let’s look at it from an “object thinking” point of view. We have a document object, which is supposed to represent a real-life entity in an object-oriented world. If such an entity is a file, we can easily implement the title reading and writing methods. If it is an Amazon S3 object or an HTTP page, we can implement them just as easily, keeping the object immutable. We have no issues as long as a real-world document exists and has its own identity: our title reading and writing methods will communicate with that real-world document and extract or update its title.
Problems arise when such an entity doesn’t exist in the real world. In that case, we need to create a mutable object property called title, read it via title(), and modify it via title(String). But an object is immutable, so we can’t have a mutable property in it—by definition! What do we do?
Think.
How could it be that our object doesn’t represent a real-world entity? Remember, the real world is everything around the living environment of an object. Is it possible that an object doesn’t represent anything and acts on its own? No, it’s not possible. Every object is a representative of a real-world entity. So, who does it represent if we want to keep the title inside it and we don’t have any file or HTTP page behind the object?

It represents computer memory.
The title of immutable document #50, “How to grill a sandwich,” is stored in memory, taking up 23 bytes of space. The document should know where those bytes are stored, and it should be able to read them and replace them with something else. Those 23 bytes are the real-world entity that the object represents. The bytes have nothing to do with the state of the object. They are a mutable real-world entity, similar to a file, an HTTP page, or an Amazon S3 object.
Unfortunately, Java (like many other modern languages) does not allow direct access to computer memory. This is how we would design our class if such direct access were possible:
@Immutable
class Document {
private final int id;
private final Memory memory;
Document(int id) {
this.id = id;
this.memory = new Memory();
}
public String title() {
return new String(this.memory.read());
}
public void title(String text) {
this.memory.write(text.getBytes());
}
}That Memory class would be implemented natively by the JDK, and all other classes would be immutable. The class Memory would have direct access to the memory heap and would be responsible for malloc and free operations at the operating system level. Having such a class would allow us to make all Java classes immutable, including StringBuffer, ByteArrayOutputStream, etc.
The Memory class would explicitly emphasize the mission of an object in a software program, which is to be a data animator. An object is not holding data; it is animating it. The data exists somewhere, and it is anemic, static, motionless, stationary, etc. The data is dead while the object is alive. The role of an object is to make a piece of data alive, to animate it but not to become a piece of data. An object needs some knowledge in order to gain access to that dead piece of data. An object may need a database unique key, an HTTP address, a file name, or a memory address in order to find the data and animate it. But an object should never think of itself as data.
What Is the Practical Solution?
Unfortunately, we don’t have such a memory-representing class in Java, Ruby, JavaScript, Python, PHP, and many other high-level languages. It looks like language designers didn’t get the idea of alive objects vs. dead data, which is sad. We’re forced to mix data with object states using the same language constructs: object variables and properties. Maybe someday we’ll have that Memory class in Java and other languages, but until then, we have a few options.
Use C++. In C++ and similar low-level languages, it is possible to access memory directly and deal with in-memory data the same way we deal with in-file or in-HTTP data. In C++, we can create that Memory class and use it exactly the way we explained above.
Use Arrays. In Java, an array is a data structure with a unique property—it can be modified while being declared as final. You can use an array of bytes as a mutable data structure inside an immutable object. It’s a surrogate solution that conceptually resembles the Memory class but is much more primitive.
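For instance, this sketch uses final arrays as the mutable “memory” inside an otherwise immutable object (the ArrayDocument name and the 256-byte capacity are arbitrary choices of mine):

```java
import java.nio.charset.StandardCharsets;

final class ArrayDocument {
    private final int id;
    // The references are final and never change, but the cells
    // they point to are writable: a poor man's Memory class.
    private final byte[] buffer = new byte[256];
    private final int[] length = new int[1];
    ArrayDocument(int id) {
        this.id = id;
    }
    public String title() {
        return new String(this.buffer, 0, this.length[0], StandardCharsets.UTF_8);
    }
    public void title(String text) {
        byte[] bytes = text.getBytes(StandardCharsets.UTF_8);
        System.arraycopy(bytes, 0, this.buffer, 0, bytes.length);
        this.length[0] = bytes.length;
    }
    public static void main(String[] args) {
        ArrayDocument doc = new ArrayDocument(50);
        doc.title("How to grill a sandwich");
        doc.title("How to cook pasta");
        System.out.println(doc.title()); // prints "How to cook pasta"
    }
}
```

A real implementation would also guard against titles longer than the buffer; the point here is only that the object’s fields stay final while the data they animate mutates.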
Avoid In-Memory Data. Try to avoid in-memory data as much as possible. In some domains, it is easy to do; for example, in web apps, file processing, I/O adapters, etc. However, in other domains, it is much easier said than done. For example, in games, data manipulation algorithms, and GUI, most of the objects animate in-memory data mostly because memory is the only resource they have. In that case, without the Memory class, you end up with mutable objects :( There is no workaround.
To summarize, don’t forget that an object is an animator of data. It is using its encapsulated knowledge in order to reach the data. No matter where the data is stored—in a file, in HTTP, or in memory—it is conceptually very different from an object state, even though they may look very similar.
A good object is an immutable animator of mutable data. Even though it is immutable and data is mutable, it is alive and data is dead in the scope of the object’s living environment.

How ORM Works
Object-relational mapping (ORM) is a technique (a.k.a. design pattern) of accessing a relational database from an object-oriented language (Java, for example). There are multiple implementations of ORM in almost every language; for example: Hibernate for Java, ActiveRecord for Ruby on Rails, Doctrine for PHP, and SQLAlchemy for Python. In Java, the ORM design is even standardized as JPA.
First, let’s see how ORM works, by example. Let’s use Java, PostgreSQL, and Hibernate. Let’s say we have a single table in the database, called post:
+-----+------------+--------------------------+
| id | date | title |
+-----+------------+--------------------------+
| 9 | 10/24/2014 | How to cook a sandwich |
| 13 | 11/03/2014 | My favorite movies |
| 27 | 11/17/2014 | How much I love my job |
+-----+------------+--------------------------+Now we want to CRUD-manipulate this table from our Java app (CRUD stands for create, read, update, and delete). First, we should create a Post class (I’m sorry it’s so long, but that’s the best I can do):
@Entity
@Table(name = "post")
public class Post {
private int id;
private Date date;
private String title;
@Id
@GeneratedValue
public int getId() {
return this.id;
}
@Temporal(TemporalType.TIMESTAMP)
public Date getDate() {
return this.date;
}
public String getTitle() {
return this.title;
}
public void setDate(Date when) {
this.date = when;
}
public void setTitle(String txt) {
this.title = txt;
}
}Before any operation with Hibernate, we have to create a session factory:
SessionFactory factory = new AnnotationConfiguration()
.configure()
.addAnnotatedClass(Post.class)
.buildSessionFactory();This factory will give us “sessions” every time we want to manipulate Post objects. Every manipulation with the session should be wrapped in this code block:
Session session = factory.openSession();
Transaction txn = null;
try {
txn = session.beginTransaction();
// your manipulations with the ORM, see below
txn.commit();
} catch (HibernateException ex) {
if (txn != null) {
txn.rollback();
}
} finally {
session.close();
}When the session is ready, here is how we get a list of all posts from that database table:
List posts = session.createQuery("FROM Post").list();
for (Post post : (List<Post>) posts){
System.out.println("Title: " + post.getTitle());
}I think it’s clear what’s going on here. Hibernate is a big, powerful engine that makes a connection to the database, executes necessary SQL SELECT requests, and retrieves the data. Then it makes instances of class Post and stuffs them with the data. When the object comes to us, it is filled with data, and we should use getters to take them out, like we’re using getTitle() above.
When we want to do a reverse operation and send an object to the database, we do all of the same but in reverse order. We make an instance of class Post, stuff it with the data, and ask Hibernate to save it:
Post post = new Post();
post.setDate(new Date());
post.setTitle("How to cook an omelette");
session.save(post);This is how almost every ORM works. The basic principle is always the same—ORM objects are anemic envelopes with data. We are talking with the ORM framework, and the framework is talking to the database. Objects only help us send our requests to the ORM framework and understand its response. Besides getters and setters, objects have no other methods. They don’t even know which database they came from.
This is how object-relational mapping works.
What’s wrong with it, you may ask? Everything!
What’s Wrong With ORM?
Seriously, what is wrong? Hibernate has been one of the most popular Java libraries for more than 10 years. Almost every SQL-intensive application in the world is using it. Almost every Java tutorial mentions Hibernate (or some other ORM like TopLink or OpenJPA) for a database-connected application. It’s a de-facto standard, and still I’m saying that it’s wrong? Yes.
I’m claiming that the entire idea behind ORM is wrong. Its invention was maybe the second biggest mistake in OOP, after the NULL reference.
Actually, I’m not the only one saying something like this, and definitely not the first. A lot about this subject has already been published by very respected authors, including OrmHate by Martin Fowler (not against ORM, but worth mentioning anyway), Object-Relational Mapping Is the Vietnam of Computer Science by Jeff Atwood, The Vietnam of Computer Science by Ted Neward, ORM Is an Anti-Pattern by Laurie Voss, and many others.
However, my argument is different from theirs. Even though their reasons are practical and valid, like “ORM is slow” or “database upgrades are hard,” they miss the main point. You can see a good practical response to these arguments given by Bozhidar Bozhanov in his ORM Haters Don’t Get It blog post.
The main point is that ORM, instead of encapsulating database interaction inside an object, extracts it away, literally tearing a solid and cohesive living organism apart. One part of the object keeps the data while another one, implemented inside the ORM engine (session factory), knows how to deal with this data and transfers it to the relational database. Look at this picture; it illustrates what ORM is doing.
I, as a reader of posts, have to deal with two components: 1) the ORM engine and 2) the “ob-truncated” object it returns to me. The behavior I’m interacting with is supposed to be provided through a single entry point, which is an object in OOP. In the case of ORM, I get this behavior via two entry points: the ORM engine and the “thing,” which we can’t even call an object.
Because of this terrible and offensive violation of the object-oriented paradigm, we have a lot of practical issues already mentioned in respected publications. I can only add a few more.
SQL Is Not Hidden. Users of ORM should speak SQL (or its dialect, like HQL). See the example above; we’re calling session.createQuery("FROM Post") in order to get all posts. Even though it’s not SQL, it is very similar to it. Thus, the relational model is not encapsulated inside objects. Instead, it is exposed to the entire application. Everybody, with each object, inevitably has to deal with a relational model in order to get or save something. Thus, ORM doesn’t hide and wrap the SQL but pollutes the entire application with it.
Difficult to Test. When some object is working with a list of posts, it needs to deal with an instance of SessionFactory. How can we mock this dependency? We would have to create a mock of it, and how complex would that task be? Look at the code above, and you will realize how verbose and cumbersome that unit test will be. Alternatively, we can write integration tests and connect the entire application to a test version of PostgreSQL. In that case, there is no need to mock SessionFactory, but such tests will be rather slow, and even more importantly, our having-nothing-to-do-with-the-database objects will be tested against a database instance. A terrible design.
Again, let me reiterate. Practical problems of ORM are just consequences. The fundamental drawback is that ORM tears objects apart, terribly and offensively violating the very idea of what an object is.
SQL-Speaking Objects
What is the alternative? Let me show it to you by example. Let’s try to design that class, Post, my way. We’ll have to break it down into two classes: Post and Posts, singular and plural. I already mentioned in one of my previous articles that a good object is always an abstraction of a real-life entity. Here is how this principle works in practice. We have two entities: database table and table row. That’s why we’ll make two classes; Posts will represent the table, and Post will represent the row.
As I also mentioned in that article, every object should work by contract and implement an interface. Let’s start our design with two interfaces. Of course, our objects will be immutable. Here is how Posts would look:
interface Posts {
Iterable<Post> iterate();
Post add(Date date, String title);
}This is how a single Post would look:
interface Post {
int id();
Date date();
String title();
}Here is how we will list all posts in the database table:
Posts posts = // we'll discuss this right now
for (Post post : posts.iterate()){
System.out.println("Title: " + post.title());
}Here is how we will create a new post:
Posts posts = // we'll discuss this right now
posts.add(new Date(), "How to cook an omelette");As you see, we have true objects now. They are in charge of all operations, and they perfectly hide their implementation details. There are no transactions, sessions, or factories. We don’t even know whether these objects are actually talking to the PostgreSQL or if they keep all the data in text files. All we need from Posts is an ability to list all posts for us and to create a new one. Implementation details are perfectly hidden inside. Now let’s see how we can implement these two classes.
I’m going to use jcabi-jdbc as a JDBC wrapper, but you can use something else like jOOQ, or just plain JDBC if you like. It doesn’t really matter. What matters is that your database interactions are hidden inside objects. Let’s start with Posts and implement it in class PgPosts (“pg” stands for PostgreSQL):
final class PgPosts implements Posts {
private final DataSource dbase;
public PgPosts(DataSource data) {
this.dbase = data;
}
public Iterable<Post> iterate() {
return new JdbcSession(this.dbase)
.sql("SELECT id FROM post")
.select(
new ListOutcome<Post>(
new ListOutcome.Mapping<Post>() {
@Override
public Post map(final ResultSet rset) throws SQLException {
return new PgPost(
PgPosts.this.dbase,
rset.getInt(1)
);
}
}
)
);
}
public Post add(Date date, String title) {
return new PgPost(
this.dbase,
new JdbcSession(this.dbase)
.sql("INSERT INTO post (date, title) VALUES (?, ?)")
.set(new Utc(date))
.set(title)
.insert(new SingleOutcome<Integer>(Integer.class))
);
}
}Next, let’s implement the Post interface in class PgPost:
final class PgPost implements Post {
private final DataSource dbase;
private final int number;
public PgPost(DataSource data, int id) {
this.dbase = data;
this.number = id;
}
public int id() {
return this.number;
}
public Date date() {
return new JdbcSession(this.dbase)
.sql("SELECT date FROM post WHERE id = ?")
.set(this.number)
.select(new SingleOutcome<Utc>(Utc.class));
}
public String title() {
return new JdbcSession(this.dbase)
.sql("SELECT title FROM post WHERE id = ?")
.set(this.number)
.select(new SingleOutcome<String>(String.class));
}
}This is how a full database interaction scenario would look using the classes we just created:
Posts posts = new PgPosts(dbase);
for (Post post : posts.iterate()){
System.out.println("Title: " + post.title());
}
Post post = posts.add(
new Date(), "How to cook an omelette"
);
System.out.println("Just added post #" + post.id());A full practical example is available as an open source web app that works with PostgreSQL using the exact approach explained above: SQL-speaking objects.
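A side benefit of hiding SQL behind Posts and Post is testability: instead of mocking a session factory, a unit test can use a plain in-memory implementation. This FakePosts class is my illustrative sketch, not part of the article’s code (and, unlike the real implementations, it mutates an internal list, which is acceptable for a test double):

```java
import java.util.ArrayList;
import java.util.Date;
import java.util.List;

interface Post {
    int id();
    Date date();
    String title();
}

interface Posts {
    Iterable<Post> iterate();
    Post add(Date date, String title);
}

// An in-memory stand-in for PgPosts, for unit tests.
final class FakePosts implements Posts {
    private final List<Post> store = new ArrayList<Post>();
    @Override
    public Iterable<Post> iterate() {
        return new ArrayList<Post>(this.store);
    }
    @Override
    public Post add(final Date date, final String title) {
        final int id = this.store.size() + 1;
        Post post = new Post() {
            @Override public int id() { return id; }
            @Override public Date date() { return date; }
            @Override public String title() { return title; }
        };
        this.store.add(post);
        return post;
    }
    public static void main(String[] args) {
        Posts posts = new FakePosts();
        Post post = posts.add(new Date(), "How to cook an omelette");
        System.out.println("Just added post #" + post.id()); // prints "Just added post #1"
    }
}
```

Any object that depends on Posts can now be unit-tested against FakePosts with no database, no sessions, and no mocks.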
What About Performance?
I can hear you screaming, “What about performance?” In that script a few lines above, we’re making many redundant round trips to the database. First, we retrieve post IDs with SELECT id and then, in order to get their titles, we make an extra SELECT title call for each post. This is inefficient, or simply put, too slow.
No worries; this is object-oriented programming, which means it is flexible! Let’s create a decorator of PgPost that will accept all data in its constructor and cache it internally, forever:
final class ConstPost implements Post {
private final Post origin;
private final Date dte;
private final String ttl;
public ConstPost(Post post, Date date, String title) {
this.origin = post;
this.dte = date;
this.ttl = title;
}
public int id() {
return this.origin.id();
}
public Date date() {
return this.dte;
}
public String title() {
return this.ttl;
}
}Pay attention: This decorator doesn’t know anything about PostgreSQL or JDBC. It just decorates an object of type Post and pre-caches the date and title. As usual, this decorator is also immutable.
Now let’s create another implementation of Posts that will return the “constant” objects:
final class ConstPgPosts implements Posts {
// ...
public Iterable<Post> iterate() {
return new JdbcSession(this.dbase)
.sql("SELECT * FROM post")
.select(
new ListOutcome<Post>(
new ListOutcome.Mapping<Post>() {
@Override
public Post map(final ResultSet rset) throws SQLException {
return new ConstPost(
new PgPost(
ConstPgPosts.this.dbase,
rset.getInt(1)
),
Utc.getTimestamp(rset, 2),
rset.getString(3)
);
}
}
)
);
}
}Now all posts returned by iterate() of this new class are pre-equipped with dates and titles fetched in one round trip to the database.
Using decorators and multiple implementations of the same interface, you can compose any functionality you wish. Most importantly, while functionality is being extended, the complexity of the design does not escalate, because classes don’t grow in size. Instead, we introduce new classes that stay cohesive and solid, because they are small.
What About Transactions?
Every object should deal with its own transactions and encapsulate them the same way as SELECT or INSERT queries. This will lead to nested transactions, which is perfectly fine provided the database server supports them. If there is no such support, create a session-wide transaction object that will accept a “callable” class. For example:
final class Txn {
private final DataSource dbase;
public Txn(DataSource data) {
this.dbase = data;
}
public <T> T call(Callable<T> callable) throws Exception {
JdbcSession session = new JdbcSession(this.dbase);
try {
session.sql("START TRANSACTION").exec();
T result = callable.call();
session.sql("COMMIT").exec();
return result;
} catch (Exception ex) {
session.sql("ROLLBACK").exec();
throw ex;
}
}
}Then, when you want to wrap a few object manipulations in one transaction, do it like this:
new Txn(dbase).call(
new Callable<Integer>() {
@Override
public Integer call() {
Posts posts = new PgPosts(dbase);
Post post = posts.add(
new Date(), "How to cook an omelette"
);
posts.comments().post("This is my first comment!"); // comments() is illustrative, not part of the Posts interface above
return post.id();
}
}
);This code will create a new post and add a comment to it. If one of the calls fails, the entire transaction will be rolled back.
This approach looks object-oriented to me. I’m calling it “SQL-speaking objects,” because they know how to speak SQL with the database server. It’s their skill, perfectly encapsulated inside their borders.
" /> anti-pattern that violates all principles of object-oriented programming, tearing objects apart and turning them into dumb and passive data bags. There is no excuse for ORM existence in any application, be it a small web app or an enterprise-size system with thousands of tables and CRUD manipulations on them. What is the alternative? SQL-speaking objects.
How ORM Works
Object-relational mapping (ORM) is a technique (a.k.a. design pattern) of accessing a relational database from an object-oriented language (Java, for example). There are multiple implementations of ORM in almost every language; for example: Hibernate for Java, ActiveRecord for Ruby on Rails, Doctrine for PHP, and SQLAlchemy for Python. In Java, the ORM design is even standardized as JPA.
First, let’s see how ORM works, by example. Let’s use Java, PostgreSQL, and Hibernate. Let’s say we have a single table in the database, called post:
+-----+------------+--------------------------+
| id | date | title |
+-----+------------+--------------------------+
| 9 | 10/24/2014 | How to cook a sandwich |
| 13 | 11/03/2014 | My favorite movies |
| 27 | 11/17/2014 | How much I love my job |
+-----+------------+--------------------------+Now we want to CRUD-manipulate this table from our Java app (CRUD stands for create, read, update, and delete). First, we should create a Post class (I’m sorry it’s so long, but that’s the best I can do):
@Entity
@Table(name = "post")
public class Post {
private int id;
private Date date;
private String title;
@Id
@GeneratedValue
public int getId() {
return this.id;
}
@Temporal(TemporalType.TIMESTAMP)
public Date getDate() {
return this.date;
}
public Title getTitle() {
return this.title;
}
public void setDate(Date when) {
this.date = when;
}
public void setTitle(String txt) {
this.title = txt;
}
}Before any operation with Hibernate, we have to create a session factory:
SessionFactory factory = new AnnotationConfiguration()
.configure()
.addAnnotatedClass(Post.class)
.buildSessionFactory();This factory will give us “sessions” every time we want to manipulate Post objects. Every manipulation via the session should be wrapped in this code block:
Session session = factory.openSession();
Transaction txn = null;
try {
txn = session.beginTransaction();
// your manipulations with the ORM, see below
txn.commit();
} catch (HibernateException ex) {
if (txn != null) {
txn.rollback();
}
} finally {
session.close();
}When the session is ready, here is how we get a list of all posts from that database table:
List posts = session.createQuery("FROM Post").list();
for (Post post : (List<Post>) posts){
System.out.println("Title: " + post.getTitle());
}I think it’s clear what’s going on here. Hibernate is a big, powerful engine that makes a connection to the database, executes necessary SQL SELECT requests, and retrieves the data. Then it makes instances of class Post and stuffs them with the data. When the object comes to us, it is filled with data, and we use getters to take the data out, as with getTitle() above.
When we want to do a reverse operation and send an object to the database, we do all of the same but in reverse order. We make an instance of class Post, stuff it with the data, and ask Hibernate to save it:
Post post = new Post();
post.setDate(new Date());
post.setTitle("How to cook an omelette");
session.save(post);This is how almost every ORM works. The basic principle is always the same—ORM objects are anemic envelopes with data. We are talking with the ORM framework, and the framework is talking to the database. Objects only help us send our requests to the ORM framework and understand its response. Besides getters and setters, objects have no other methods. They don’t even know which database they came from.
This is how object-relational mapping works.
What’s wrong with it, you may ask? Everything!
What’s Wrong With ORM?
Seriously, what is wrong? Hibernate has been one of the most popular Java libraries for more than 10 years. Almost every SQL-intensive application in the world uses it. Nearly every Java tutorial mentions Hibernate (or some other ORM, like TopLink or OpenJPA) when a database-connected application is discussed. It’s a de facto standard, and still I’m saying it’s wrong? Yes.
I’m claiming that the entire idea behind ORM is wrong. Its invention was maybe the second biggest mistake in OOP, after the NULL reference.
Actually, I’m not the only one saying something like this, and definitely not the first. A lot about this subject has already been published by very respected authors, including OrmHate by Martin Fowler (not against ORM, but worth mentioning anyway), Object-Relational Mapping Is the Vietnam of Computer Science by Jeff Atwood, The Vietnam of Computer Science by Ted Neward, ORM Is an Anti-Pattern by Laurie Voss, and many others.
However, my argument is different from theirs. Even though their reasons, like “ORM is slow” or “database upgrades are hard,” are practical and valid, they miss the main point. A very good practical answer to those practical arguments is given by Bozhidar Bozhanov in his ORM Haters Don’t Get It blog post.
The main point is that ORM, instead of encapsulating database interaction inside an object, extracts it away, literally tearing a solid and cohesive living organism apart. One part of the object keeps the data while another one, implemented inside the ORM engine (session factory), knows how to deal with this data and transfers it to the relational database. Look at this picture; it illustrates what ORM is doing.
I, being a reader of posts, have to deal with two components: 1) the ORM and 2) the “ob-truncated” object returned to me. The behavior I’m interacting with is supposed to be provided through a single entry point, which is an object in OOP. In the case of ORM, I’m getting this behavior via two entry points—the ORM engine and the “thing,” which we can’t even call an object.
Because of this terrible and offensive violation of the object-oriented paradigm, we have a lot of practical issues already mentioned in respected publications. I can only add a few more.
SQL Is Not Hidden. Users of ORM should speak SQL (or its dialect, like HQL). See the example above; we’re calling session.createQuery("FROM Post") in order to get all posts. Even though it’s not SQL, it is very similar to it. Thus, the relational model is not encapsulated inside objects. Instead, it is exposed to the entire application. Everybody, with each object, inevitably has to deal with a relational model in order to get or save something. Thus, ORM doesn’t hide and wrap the SQL but pollutes the entire application with it.
Difficult to Test. When some object works with a list of posts, it needs an instance of SessionFactory. How do we mock this dependency? Writing that mock is a task in itself; look at the code above, and you will realize how verbose and cumbersome the unit test will be. Alternatively, we can write integration tests and connect the entire application to a test instance of PostgreSQL. Then there is no need to mock SessionFactory, but such tests will be rather slow, and, even more important, objects that have nothing to do with the database will be tested against a database instance. A terrible design.
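To make that verbosity concrete, here is a rough sketch of what hand-stubbing the dependency chain looks like. The Query, Session, and SessionFactory interfaces below are drastically simplified stand-ins of my own, not Hibernate’s real API (the real interfaces carry dozens of methods each, which is exactly the problem), and StubFactory is a name I made up:

```java
import java.util.List;

// Drastically simplified stand-ins for Hibernate's interfaces
// (illustration only; the real ones have dozens of methods each).
interface Query {
    List<?> list();
}
interface Session {
    Query createQuery(String hql);
}
interface SessionFactory {
    Session openSession();
}

// Even for one trivial query, a hand-rolled stub has to fake the
// entire factory -> session -> query chain:
final class StubFactory implements SessionFactory {
    private final List<?> rows;
    StubFactory(final List<?> rows) {
        this.rows = rows;
    }
    @Override
    public Session openSession() {
        return new Session() {
            @Override
            public Query createQuery(final String hql) {
                return new Query() {
                    @Override
                    public List<?> list() {
                        return StubFactory.this.rows;
                    }
                };
            }
        };
    }
}
```

And this fakes only two methods out of the dozens the real interfaces declare; a mocking framework shortens the typing but not the coupling.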
Again, let me reiterate. Practical problems of ORM are just consequences. The fundamental drawback is that ORM tears objects apart, terribly and offensively violating the very idea of what an object is.
SQL-Speaking Objects
What is the alternative? Let me show it to you by example. Let’s try to design that class, Post, my way. We’ll have to break it down into two classes: Post and Posts, singular and plural. I already mentioned in one of my previous articles that a good object is always an abstraction of a real-life entity. Here is how this principle works in practice. We have two entities: database table and table row. That’s why we’ll make two classes; Posts will represent the table, and Post will represent the row.
As I also mentioned in that article, every object should work by contract and implement an interface. Let’s start our design with two interfaces. Of course, our objects will be immutable. Here is how Posts would look:
interface Posts {
Iterable<Post> iterate();
Post add(Date date, String title);
}This is how a single Post would look:
interface Post {
int id();
Date date();
String title();
}Here is how we will list all posts in the database table:
Posts posts = // we'll discuss this right now
for (Post post : posts.iterate()){
System.out.println("Title: " + post.title());
}Here is how we will create a new post:
Posts posts = // we'll discuss this right now
posts.add(new Date(), "How to cook an omelette");As you see, we have true objects now. They are in charge of all operations, and they perfectly hide their implementation details. There are no transactions, sessions, or factories. We don’t even know whether these objects are actually talking to PostgreSQL or keeping all the data in text files. All we need from Posts is the ability to list all posts and to create a new one. Implementation details are perfectly hidden inside. Now let’s see how we can implement these two classes.
I’m going to use jcabi-jdbc as a JDBC wrapper, but you can use something else like jOOQ, or just plain JDBC if you like. It doesn’t really matter. What matters is that your database interactions are hidden inside objects. Let’s start with Posts and implement it in class PgPosts (“pg” stands for PostgreSQL):
final class PgPosts implements Posts {
private final DataSource dbase;
public PgPosts(DataSource data) {
this.dbase = data;
}
public Iterable<Post> iterate() {
return new JdbcSession(this.dbase)
.sql("SELECT id FROM post")
.select(
new ListOutcome<Post>(
new ListOutcome.Mapping<Post>() {
@Override
public Post map(final ResultSet rset) throws SQLException {
return new PgPost(
PgPosts.this.dbase,
rset.getInt(1)
);
}
}
)
);
}
public Post add(Date date, String title) {
return new PgPost(
this.dbase,
new JdbcSession(this.dbase)
.sql("INSERT INTO post (date, title) VALUES (?, ?)")
.set(new Utc(date))
.set(title)
.insert(new SingleOutcome<Integer>(Integer.class))
);
}
}Next, let’s implement the Post interface in class PgPost:
final class PgPost implements Post {
private final DataSource dbase;
private final int number;
public PgPost(DataSource data, int id) {
this.dbase = data;
this.number = id;
}
public int id() {
return this.number;
}
public Date date() {
return new JdbcSession(this.dbase)
.sql("SELECT date FROM post WHERE id = ?")
.set(this.number)
.select(new SingleOutcome<Utc>(Utc.class));
}
public String title() {
return new JdbcSession(this.dbase)
.sql("SELECT title FROM post WHERE id = ?")
.set(this.number)
.select(new SingleOutcome<String>(String.class));
}
}This is how a full database interaction scenario would look using the classes we just created:
Posts posts = new PgPosts(dbase);
for (Post post : posts.iterate()){
System.out.println("Title: " + post.title());
}
Post post = posts.add(
new Date(), "How to cook an omelette"
);
System.out.println("Just added post #" + post.id());You can see a full practical example here. It’s an open source web app that works with PostgreSQL using the exact approach explained above—SQL-speaking objects.
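A side benefit of this design is exactly the testability that ORM lacks: because callers depend only on the Posts and Post contracts, a unit test can swap in an in-memory fake instead of PostgreSQL. Here is a minimal sketch; the name FkPosts is mine, and the contracts are repeated so the sketch compiles on its own:

```java
import java.util.ArrayList;
import java.util.Date;
import java.util.List;

// The article's contracts, repeated so this sketch compiles alone.
interface Post {
    int id();
    Date date();
    String title();
}
interface Posts {
    Iterable<Post> iterate();
    Post add(Date date, String title);
}

// An in-memory fake for unit tests: no SQL, no mocking framework,
// just another implementation of the same contract.
final class FkPosts implements Posts {
    private final List<Post> all = new ArrayList<Post>();
    @Override
    public Iterable<Post> iterate() {
        return new ArrayList<Post>(this.all);
    }
    @Override
    public Post add(final Date date, final String title) {
        final int id = this.all.size() + 1;
        final Post post = new Post() {
            @Override
            public int id() {
                return id;
            }
            @Override
            public Date date() {
                return date;
            }
            @Override
            public String title() {
                return title;
            }
        };
        this.all.add(post);
        return post;
    }
}
```

Any code that accepts Posts can now be unit-tested against FkPosts and deployed against PgPosts, without changing a line.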
What About Performance?
I can hear you screaming, “What about performance?” In that script a few lines above, we’re making many redundant round trips to the database. First, we retrieve post IDs with SELECT id and then, in order to get their titles, we make an extra SELECT title call for each post. This is inefficient, or simply put, too slow.
No worries; this is object-oriented programming, which means it is flexible! Let’s create a decorator of PgPost that will accept all data in its constructor and cache it internally, forever:
final class ConstPost implements Post {
private final Post origin;
private final Date dte;
private final String ttl;
public ConstPost(Post post, Date date, String title) {
this.origin = post;
this.dte = date;
this.ttl = title;
}
public int id() {
return this.origin.id();
}
public Date date() {
return this.dte;
}
public String title() {
return this.ttl;
}
}Pay attention: This decorator doesn’t know anything about PostgreSQL or JDBC. It just decorates an object of type Post and pre-caches the date and title. As usual, this decorator is also immutable.
Now let’s create another implementation of Posts that will return the “constant” objects:
final class ConstPgPosts implements Posts {
// ...
public Iterable<Post> iterate() {
return new JdbcSession(this.dbase)
.sql("SELECT id, date, title FROM post")
.select(
new ListOutcome<Post>(
new ListOutcome.Mapping<Post>() {
@Override
public Post map(final ResultSet rset) throws SQLException {
return new ConstPost(
new PgPost(
ConstPgPosts.this.dbase,
rset.getInt(1)
),
Utc.getTimestamp(rset, 2),
rset.getString(3)
);
}
}
)
);
}
}Now all posts returned by iterate() of this new class are pre-equipped with dates and titles fetched in one round trip to the database.
Using decorators and multiple implementations of the same interface, you can compose any functionality you wish. Most important, while functionality is being extended, the complexity of the design does not escalate, because classes don’t grow in size. Instead, we introduce new classes that stay cohesive and solid because they are small.
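For instance, here is one more decorator in the same spirit as ConstPost, sketched with the standard library only (the name CachedPost is mine, not from the article): it asks its origin for the title once, on first use, and memoizes the answer, trading ConstPost’s strict immutability for laziness. The Post contract is repeated so the sketch compiles alone:

```java
import java.util.Date;

// The article's Post contract, repeated so the sketch compiles alone.
interface Post {
    int id();
    Date date();
    String title();
}

// A lazy caching decorator: the first call to title() goes to the
// origin (which may run a SELECT); later calls reuse the answer.
// Note the internal mutable field, the price of laziness.
final class CachedPost implements Post {
    private final Post origin;
    private String cached;
    CachedPost(final Post post) {
        this.origin = post;
    }
    @Override
    public int id() {
        return this.origin.id();
    }
    @Override
    public Date date() {
        return this.origin.date();
    }
    @Override
    public String title() {
        if (this.cached == null) {
            this.cached = this.origin.title();
        }
        return this.cached;
    }
}
```

Wrapping a PgPost in a CachedPost would give you the one-round-trip behavior without touching PgPost itself; that is the composition the paragraph above describes.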
What About Transactions?
Every object should deal with its own transactions and encapsulate them the same way as SELECT or INSERT queries. This will lead to nested transactions, which is perfectly fine provided the database server supports them. If there is no such support, create a session-wide transaction object that will accept a “callable” class. For example:
final class Txn {
private final DataSource dbase;
Txn(DataSource data) {
this.dbase = data;
}
public <T> T call(Callable<T> callable) throws Exception {
JdbcSession session = new JdbcSession(this.dbase);
session.sql("START TRANSACTION").exec();
try {
T result = callable.call();
session.sql("COMMIT").exec();
return result;
} catch (Exception ex) {
session.sql("ROLLBACK").exec();
throw ex;
}
}
}Then, when you want to wrap a few object manipulations in one transaction, do it like this:
new Txn(dbase).call(
new Callable<Integer>() {
@Override
public Integer call() {
Posts posts = new PgPosts(dbase);
Post post = posts.add(
new Date(), "How to cook an omelette"
);
post.comments().post("This is my first comment!");
return post.id();
}
}
);This code will create a new post and post a comment to it. If either call fails, the entire transaction will be rolled back.
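The commit-or-rollback control flow of such a wrapper can be checked in isolation. In this self-contained sketch, FakeSession (my stand-in for jcabi-jdbc’s JdbcSession, invented purely for the demonstration) merely records the statements it is asked to execute, and LoggedTxn reproduces the same control flow against it:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;

// FakeSession records every statement it is told to execute; it is a
// stand-in for jcabi-jdbc's JdbcSession, for demonstration only.
final class FakeSession {
    final List<String> log = new ArrayList<String>();
    void exec(final String sql) {
        this.log.add(sql);
    }
}

// The same commit-or-rollback control flow as the Txn class above,
// rewritten against FakeSession so it can run anywhere.
final class LoggedTxn {
    private final FakeSession session;
    LoggedTxn(final FakeSession session) {
        this.session = session;
    }
    <T> T call(final Callable<T> callable) throws Exception {
        this.session.exec("START TRANSACTION");
        try {
            final T result = callable.call();
            this.session.exec("COMMIT");
            return result;
        } catch (final Exception ex) {
            this.session.exec("ROLLBACK");
            throw ex;
        }
    }
}
```

If the callable returns normally, the log ends with COMMIT; if it throws, the log ends with ROLLBACK and the exception propagates to the caller, which is exactly the behavior we want from the real Txn.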
This approach looks object-oriented to me. I’m calling it “SQL-speaking objects,” because they know how to speak SQL with the database server. It’s their skill, perfectly encapsulated inside their borders.
https://www.yegor256.com/2014/12/01/orm-offensive-anti-pattern.html
ORM Is an Offensive Anti-Pattern
- Yegor Bugayenko
TL;DR ORM is a terrible anti-pattern that violates all principles of object-oriented programming, tearing objects apart and turning them into dumb and passive data bags. There is no excuse for ORM existence in any application, be it a small web app or an enterprise-size system with thousands of tables and CRUD manipulations on them. What is the alternative? SQL-speaking objects.

How ORM Works
Object-relational mapping (ORM) is a technique (a.k.a. design pattern) of accessing a relational database from an object-oriented language (Java, for example). There are multiple implementations of ORM in almost every language; for example: Hibernate for Java, ActiveRecord for Ruby on Rails, Doctrine for PHP, and SQLAlchemy for Python. In Java, the ORM design is even standardized as JPA.
First, let’s see how ORM works, by example. Let’s use Java, PostgreSQL, and Hibernate. Let’s say we have a single table in the database, called post:
+-----+------------+--------------------------+
| id | date | title |
+-----+------------+--------------------------+
| 9 | 10/24/2014 | How to cook a sandwich |
| 13 | 11/03/2014 | My favorite movies |
| 27 | 11/17/2014 | How much I love my job |
+-----+------------+--------------------------+Now we want to CRUD-manipulate this table from our Java app (CRUD stands for create, read, update, and delete). First, we should create a Post class (I’m sorry it’s so long, but that’s the best I can do):
@Entity
@Table(name = "post")
public class Post {
private int id;
private Date date;
private String title;
@Id
@GeneratedValue
public int getId() {
return this.id;
}
@Temporal(TemporalType.TIMESTAMP)
public Date getDate() {
return this.date;
}
public Title getTitle() {
return this.title;
}
public void setDate(Date when) {
this.date = when;
}
public void setTitle(String txt) {
this.title = txt;
}
}Before any operation with Hibernate, we have to create a session factory:
SessionFactory factory = new AnnotationConfiguration()
.configure()
.addAnnotatedClass(Post.class)
.buildSessionFactory();This factory will give us “sessions” every time we want to manipulate with Post objects. Every manipulation with the session should be wrapped in this code block:
Session session = factory.openSession();
try {
Transaction txn = session.beginTransaction();
// your manipulations with the ORM, see below
txn.commit();
} catch (HibernateException ex) {
txn.rollback();
} finally {
session.close();
}When the session is ready, here is how we get a list of all posts from that database table:
List posts = session.createQuery("FROM Post").list();
for (Post post : (List<Post>) posts){
System.out.println("Title: " + post.getTitle());
}I think it’s clear what’s going on here. Hibernate is a big, powerful engine that makes a connection to the database, executes necessary SQL SELECT requests, and retrieves the data. Then it makes instances of class Post and stuffs them with the data. When the object comes to us, it is filled with data, and we should use getters to take them out, like we’re using getTitle() above.
When we want to do a reverse operation and send an object to the database, we do all of the same but in reverse order. We make an instance of class Post, stuff it with the data, and ask Hibernate to save it:
Post post = new Post();
post.setDate(new Date());
post.setTitle("How to cook an omelette");
session.save(post);This is how almost every ORM works. The basic principle is always the same—ORM objects are anemic envelopes with data. We are talking with the ORM framework, and the framework is talking to the database. Objects only help us send our requests to the ORM framework and understand its response. Besides getters and setters, objects have no other methods. They don’t even know which database they came from.
This is how object-relational mapping works.
What’s wrong with it, you may ask? Everything!
What’s Wrong With ORM?
Seriously, what is wrong? Hibernate has been one of the most popular Java libraries for more than 10 years already. Almost every SQL-intensive application in the world is using it. Each Java tutorial would mention Hibernate (or maybe some other ORM like TopLink or OpenJPA) for a database-connected application. It’s a standard de-facto and still I’m saying that it’s wrong? Yes.
I’m claiming that the entire idea behind ORM is wrong. Its invention was maybe the second big mistake in OOP after NULL reference.
Actually, I’m not the only one saying something like this, and definitely not the first. A lot about this subject has already been published by very respected authors, including OrmHate by Martin Fowler (not against ORM, but worth mentioning anyway), Object-Relational Mapping Is the Vietnam of Computer Science by Jeff Atwood, The Vietnam of Computer Science by Ted Neward, ORM Is an Anti-Pattern by Laurie Voss, and many others.
However, my argument is different than what they’re saying. Even though their reasons are practical and valid, like “ORM is slow” or “database upgrades are hard,” they miss the main point. You can see a very good, practical answer to these practical arguments given by Bozhidar Bozhanov in his ORM Haters Don’t Get It blog post.
The main point is that ORM, instead of encapsulating database interaction inside an object, extracts it away, literally tearing a solid and cohesive living organism apart. One part of the object keeps the data while another one, implemented inside the ORM engine (session factory), knows how to deal with this data and transfers it to the relational database. Look at this picture; it illustrates what ORM is doing.
I, being a reader of posts, have to deal with two components: 1) the ORM and 2) the “ob-truncated” object returned to me. The behavior I’m interacting with is supposed to be provided through a single entry point, which is an object in OOP. In the case of ORM, I’m getting this behavior via two entry points—the ORM engine and the “thing,” which we can’t even call an object.
Because of this terrible and offensive violation of the object-oriented paradigm, we have a lot of practical issues already mentioned in respected publications. I can only add a few more.
SQL Is Not Hidden. Users of ORM should speak SQL (or its dialect, like HQL). See the example above; we’re calling session.createQuery("FROM Post") in order to get all posts. Even though it’s not SQL, it is very similar to it. Thus, the relational model is not encapsulated inside objects. Instead, it is exposed to the entire application. Everybody, with each object, inevitably has to deal with a relational model in order to get or save something. Thus, ORM doesn’t hide and wrap the SQL but pollutes the entire application with it.
Difficult to Test. When some object is working with a list of posts, it needs to deal with an instance of SessionFactory. How can we mock this dependency? We have to create a mock of it? How complex is this task? Look at the code above, and you will realize how verbose and cumbersome that unit test will be. Instead, we can write integration tests and connect the entire application to a test version of PostgreSQL. In that case, there is no need to mock SessionFactory, but such tests will be rather slow, and even more important, our having-nothing-to-do-with-the-database objects will be tested against the database instance. A terrible design.
Again, let me reiterate. Practical problems of ORM are just consequences. The fundamental drawback is that ORM tears objects apart, terribly and offensively violating the very idea of what an object is.
SQL-Speaking Objects
What is the alternative? Let me show it to you by example. Let’s try to design that class, Post, my way. We’ll have to break it down into two classes: Post and Posts, singular and plural. I already mentioned in one of my previous articles that a good object is always an abstraction of a real-life entity. Here is how this principle works in practice. We have two entities: database table and table row. That’s why we’ll make two classes; Posts will represent the table, and Post will represent the row.
As I also mentioned in that article, every object should work by contract and implement an interface. Let’s start our design with two interfaces. Of course, our objects will be immutable. Here is how Posts would look:
interface Posts {
Iterable<Post> iterate();
Post add(Date date, String title);
}This is how a single Post would look:
interface Post {
int id();
Date date();
String title();
}Here is how we will list all posts in the database table:
Posts posts = // we'll discuss this right now
for (Post post : posts.iterate()){
System.out.println("Title: " + post.title());
}Here is how we will create a new post:
Posts posts = // we'll discuss this right now
posts.add(new Date(), "How to cook an omelette");As you see, we have true objects now. They are in charge of all operations, and they perfectly hide their implementation details. There are no transactions, sessions, or factories. We don’t even know whether these objects are actually talking to the PostgreSQL or if they keep all the data in text files. All we need from Posts is an ability to list all posts for us and to create a new one. Implementation details are perfectly hidden inside. Now let’s see how we can implement these two classes.
I’m going to use jcabi-jdbc as a JDBC wrapper, but you can use something else like jOOQ, or just plain JDBC if you like. It doesn’t really matter. What matters is that your database interactions are hidden inside objects. Let’s start with Posts and implement it in class PgPosts (“pg” stands for PostgreSQL):
final class PgPosts implements Posts {
private final Source dbase;
public PgPosts(DataSource data) {
this.dbase = data;
}
public Iterable<Post> iterate() {
return new JdbcSession(this.dbase)
.sql("SELECT id FROM post")
.select(
new ListOutcome<Post>(
new ListOutcome.Mapping<Post>() {
@Override
public Post map(final ResultSet rset) {
return new PgPost(
this.dbase,
rset.getInt(1)
);
}
}
)
);
}
public Post add(Date date, String title) {
return new PgPost(
this.dbase,
new JdbcSession(this.dbase)
.sql("INSERT INTO post (date, title) VALUES (?, ?)")
.set(new Utc(date))
.set(title)
.insert(new SingleOutcome<Integer>(Integer.class))
);
}
}Next, let’s implement the Post interface in class PgPost:
final class PgPost implements Post {
private final Source dbase;
private final int number;
public PgPost(DataSource data, int id) {
this.dbase = data;
this.number = id;
}
public int id() {
return this.number;
}
public Date date() {
return new JdbcSession(this.dbase)
.sql("SELECT date FROM post WHERE id = ?")
.set(this.number)
.select(new SingleOutcome<Utc>(Utc.class));
}
public String title() {
return new JdbcSession(this.dbase)
.sql("SELECT title FROM post WHERE id = ?")
.set(this.number)
.select(new SingleOutcome<String>(String.class));
}
}This is how a full database interaction scenario would look like using the classes we just created:
Posts posts = new PgPosts(dbase);
for (Post post : posts.iterate()){
System.out.println("Title: " + post.title());
}
Post post = posts.add(
new Date(), "How to cook an omelette"
);
System.out.println("Just added post #" + post.id());You can see a full practical example here. It’s an open source web app that works with PostgreSQL using the exact approach explained above—SQL-speaking objects.
What About Performance?
I can hear you screaming, “What about performance?” In that script a few lines above, we’re making many redundant round trips to the database. First, we retrieve post IDs with SELECT id and then, in order to get their titles, we make an extra SELECT title call for each post. This is inefficient, or simply put, too slow.
No worries; this is object-oriented programming, which means it is flexible! Let’s create a decorator of PgPost that will accept all data in its constructor and cache it internally, forever:
final class ConstPost implements Post {
private final Post origin;
private final Date dte;
private final String ttl;
public ConstPost(Post post, Date date, String title) {
this.origin = post;
this.dte = date;
this.ttl = title;
}
public int id() {
return this.origin.id();
}
public Date date() {
return this.dte;
}
public String title() {
return this.ttl;
}
}Pay attention: This decorator doesn’t know anything about PostgreSQL or JDBC. It just decorates an object of type Post and pre-caches the date and title. As usual, this decorator is also immutable.
Now let’s create another implementation of Posts that will return the “constant” objects:
final class ConstPgPosts implements Posts {
// ...
public Iterable<Post> iterate() {
return new JdbcSession(this.dbase)
.sql("SELECT * FROM post")
.select(
new ListOutcome<Post>(
new ListOutcome.Mapping<Post>() {
@Override
public Post map(final ResultSet rset) {
return new ConstPost(
new PgPost(
ConstPgPosts.this.dbase,
rset.getInt(1)
),
Utc.getTimestamp(rset, 2),
rset.getString(3)
);
}
}
)
);
}
}Now all posts returned by iterate() of this new class are pre-equipped with dates and titles fetched in one round trip to the database.
Using decorators and multiple implementations of the same interface, you can compose any functionality you wish. What is the most important is that while functionality is being extended, the complexity of the design is not escalating, because classes don’t grow in size. Instead, we’re introducing new classes that stay cohesive and solid, because they are small.
What About Transactions?
Every object should deal with its own transactions and encapsulate them the same way as SELECT or INSERT queries. This will lead to nested transactions, which is perfectly fine provided the database server supports them. If there is no such support, create a session-wide transaction object that will accept a “callable” class. For example:
final class Txn {
private final DataSource dbase;
public <T> T call(Callable<T> callable) {
JdbcSession session = new JdbcSession(this.dbase);
try {
session.sql("START TRANSACTION").exec();
T result = callable.call();
session.sql("COMMIT").exec();
return result;
} catch (Exception ex) {
session.sql("ROLLBACK").exec();
throw ex;
}
}
}Then, when you want to wrap a few object manipulations in one transaction, do it like this:
new Txn(dbase).call(
new Callable<Integer>() {
@Override
public Integer call() {
Posts posts = new PgPosts(dbase);
Post post = posts.add(
new Date(), "How to cook an omelette"
);
posts.comments().post("This is my first comment!");
return post.id();
}
}
);This code will create a new post and post a comment to it. If one of the calls fail, the entire transaction will be rolled back.
This approach looks object-oriented to me. I’m calling it “SQL-speaking objects,” because they know how to speak SQL with the database server. It’s their skill, perfectly encapsulated inside their borders.
TL;DR ORM is a terrible anti-pattern that violates all principles of object-oriented programming, tearing objects apart and turning them into dumb and passive data bags. There is no excuse for ORM existence in any application, be it a small web app or an enterprise-size system with thousands of tables and CRUD manipulations on them. What is the alternative? SQL-speaking objects.

How ORM Works
Object-relational mapping (ORM) is a technique (a.k.a. design pattern) of accessing a relational database from an object-oriented language (Java, for example). There are multiple implementations of ORM in almost every language; for example: Hibernate for Java, ActiveRecord for Ruby on Rails, Doctrine for PHP, and SQLAlchemy for Python. In Java, the ORM design is even standardized as JPA.
First, let’s see how ORM works, by example. Let’s use Java, PostgreSQL, and Hibernate. Let’s say we have a single table in the database, called post:
+-----+------------+--------------------------+
| id | date | title |
+-----+------------+--------------------------+
| 9 | 10/24/2014 | How to cook a sandwich |
| 13 | 11/03/2014 | My favorite movies |
| 27 | 11/17/2014 | How much I love my job |
+-----+------------+--------------------------+Now we want to CRUD-manipulate this table from our Java app (CRUD stands for create, read, update, and delete). First, we should create a Post class (I’m sorry it’s so long, but that’s the best I can do):
@Entity
@Table(name = "post")
public class Post {
private int id;
private Date date;
private String title;
@Id
@GeneratedValue
public int getId() {
return this.id;
}
@Temporal(TemporalType.TIMESTAMP)
public Date getDate() {
return this.date;
}
public Title getTitle() {
return this.title;
}
public void setDate(Date when) {
this.date = when;
}
public void setTitle(String txt) {
this.title = txt;
}
}Before any operation with Hibernate, we have to create a session factory:
SessionFactory factory = new AnnotationConfiguration()
.configure()
.addAnnotatedClass(Post.class)
.buildSessionFactory();This factory will give us “sessions” every time we want to manipulate with Post objects. Every manipulation with the session should be wrapped in this code block:
Session session = factory.openSession();
try {
Transaction txn = session.beginTransaction();
// your manipulations with the ORM, see below
txn.commit();
} catch (HibernateException ex) {
txn.rollback();
} finally {
session.close();
}When the session is ready, here is how we get a list of all posts from that database table:
List posts = session.createQuery("FROM Post").list();
for (Post post : (List<Post>) posts){
System.out.println("Title: " + post.getTitle());
}I think it’s clear what’s going on here. Hibernate is a big, powerful engine that makes a connection to the database, executes necessary SQL SELECT requests, and retrieves the data. Then it makes instances of class Post and stuffs them with the data. When the object comes to us, it is filled with data, and we should use getters to take them out, like we’re using getTitle() above.
When we want to do a reverse operation and send an object to the database, we do all of the same but in reverse order. We make an instance of class Post, stuff it with the data, and ask Hibernate to save it:
Post post = new Post();
post.setDate(new Date());
post.setTitle("How to cook an omelette");
session.save(post);This is how almost every ORM works. The basic principle is always the same—ORM objects are anemic envelopes with data. We are talking with the ORM framework, and the framework is talking to the database. Objects only help us send our requests to the ORM framework and understand its response. Besides getters and setters, objects have no other methods. They don’t even know which database they came from.
This is how object-relational mapping works.
What’s wrong with it, you may ask? Everything!
What’s Wrong With ORM?
Seriously, what is wrong? Hibernate has been one of the most popular Java libraries for more than 10 years already. Almost every SQL-intensive application in the world is using it. Each Java tutorial would mention Hibernate (or maybe some other ORM like TopLink or OpenJPA) for a database-connected application. It’s a standard de-facto and still I’m saying that it’s wrong? Yes.
I’m claiming that the entire idea behind ORM is wrong. Its invention was maybe the second big mistake in OOP after NULL reference.
Actually, I’m not the only one saying something like this, and definitely not the first. A lot about this subject has already been published by very respected authors, including OrmHate by Martin Fowler (not against ORM, but worth mentioning anyway), Object-Relational Mapping Is the Vietnam of Computer Science by Jeff Atwood, The Vietnam of Computer Science by Ted Neward, ORM Is an Anti-Pattern by Laurie Voss, and many others.
However, my argument is different from theirs. Even though their reasons are practical and valid, like “ORM is slow” or “database upgrades are hard,” they miss the main point. You can see a very good, practical answer to these practical arguments given by Bozhidar Bozhanov in his ORM Haters Don’t Get It blog post.
The main point is that ORM, instead of encapsulating database interaction inside an object, rips it out, literally tearing a solid and cohesive living organism apart. One part of the object keeps the data, while another one, implemented inside the ORM engine (session factory), knows how to deal with this data and transfers it to the relational database. Look at this picture; it illustrates what ORM is doing.
I, as a user of posts, have to deal with two components: 1) the ORM and 2) the “ob-truncated” object it returns to me. The behavior I’m interacting with is supposed to be provided through a single entry point, which is an object in OOP. In the case of ORM, I’m getting this behavior via two entry points: the ORM engine and the “thing,” which we can’t even call an object.
Because of this terrible and offensive violation of the object-oriented paradigm, we have a lot of practical issues already mentioned in respected publications. I can only add a few more.
SQL Is Not Hidden. Users of ORM have to speak SQL (or its dialect, like HQL). See the example above; we’re calling session.createQuery("FROM Post") in order to get all posts. Even though it’s not SQL, it is very similar to it. Thus, the relational model is not encapsulated inside objects. Instead, it is exposed to the entire application. Everybody, with each object, inevitably has to deal with the relational model in order to get or save something. In effect, ORM doesn’t hide and wrap the SQL but pollutes the entire application with it.
Difficult to Test. When some object is working with a list of posts, it needs to deal with an instance of SessionFactory. How can we mock this dependency? We would have to create a mock of it, and that is far from trivial. Look at the code above, and you will realize how verbose and cumbersome that unit test will be. Instead, we could write integration tests and connect the entire application to a test version of PostgreSQL. In that case, there is no need to mock SessionFactory, but such tests will be rather slow, and, even more importantly, our having-nothing-to-do-with-the-database objects will be tested against a database instance. A terrible design.
Again, let me reiterate. Practical problems of ORM are just consequences. The fundamental drawback is that ORM tears objects apart, terribly and offensively violating the very idea of what an object is.
SQL-Speaking Objects
What is the alternative? Let me show it to you by example. Let’s try to design that class, Post, my way. We’ll have to break it down into two classes: Post and Posts, singular and plural. I already mentioned in one of my previous articles that a good object is always an abstraction of a real-life entity. Here is how this principle works in practice. We have two entities: database table and table row. That’s why we’ll make two classes; Posts will represent the table, and Post will represent the row.
As I also mentioned in that article, every object should work by contract and implement an interface. Let’s start our design with two interfaces. Of course, our objects will be immutable. Here is how Posts would look:
interface Posts {
Iterable<Post> iterate();
Post add(Date date, String title);
}
This is how a single Post would look:
interface Post {
int id();
Date date();
String title();
}
Here is how we will list all posts in the database table:
Posts posts = // we'll discuss this right now
for (Post post : posts.iterate()){
System.out.println("Title: " + post.title());
}
Here is how we will create a new post:
Posts posts = // we'll discuss this right now
posts.add(new Date(), "How to cook an omelette");
As you see, we have true objects now. They are in charge of all operations, and they perfectly hide their implementation details. There are no transactions, sessions, or factories. We don’t even know whether these objects are actually talking to PostgreSQL or keeping all the data in text files. All we need from Posts is an ability to list all posts for us and to create a new one. Implementation details are perfectly hidden inside. Now let’s see how we can implement these two classes.
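A pleasant consequence of hiding everything behind these two interfaces is testability. Here is a minimal sketch (FakePosts and FakePost are hypothetical names of mine, not part of the article): any class that depends on Posts can be unit-tested against a tiny in-memory implementation, with no database and no mocking framework. The interfaces are repeated so the sketch is self-contained.

```java
import java.util.ArrayList;
import java.util.Date;
import java.util.List;

interface Posts {
    Iterable<Post> iterate();
    Post add(Date date, String title);
}

interface Post {
    int id();
    Date date();
    String title();
}

// Hypothetical in-memory "table", good enough for unit tests.
final class FakePosts implements Posts {
    private final List<Post> rows = new ArrayList<>();
    @Override
    public Iterable<Post> iterate() {
        return new ArrayList<>(this.rows); // defensive copy
    }
    @Override
    public Post add(Date date, String title) {
        final Post post = new FakePost(this.rows.size() + 1, date, title);
        this.rows.add(post);
        return post;
    }
}

// Hypothetical immutable "row" for the fake table.
final class FakePost implements Post {
    private final int num;
    private final Date dte;
    private final String ttl;
    FakePost(int id, Date date, String title) {
        this.num = id;
        this.dte = date;
        this.ttl = title;
    }
    @Override
    public int id() {
        return this.num;
    }
    @Override
    public Date date() {
        return this.dte;
    }
    @Override
    public String title() {
        return this.ttl;
    }
}
```

A class under test simply receives a FakePosts where it would otherwise receive a PgPosts; the contract is the same.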
I’m going to use jcabi-jdbc as a JDBC wrapper, but you can use something else like jOOQ, or just plain JDBC if you like. It doesn’t really matter. What matters is that your database interactions are hidden inside objects. Let’s start with Posts and implement it in class PgPosts (“pg” stands for PostgreSQL):
final class PgPosts implements Posts {
private final DataSource dbase;
public PgPosts(DataSource data) {
this.dbase = data;
}
public Iterable<Post> iterate() {
return new JdbcSession(this.dbase)
.sql("SELECT id FROM post")
.select(
new ListOutcome<Post>(
new ListOutcome.Mapping<Post>() {
@Override
public Post map(final ResultSet rset) throws SQLException {
return new PgPost(
PgPosts.this.dbase,
rset.getInt(1)
);
}
}
)
);
}
public Post add(Date date, String title) {
return new PgPost(
this.dbase,
new JdbcSession(this.dbase)
.sql("INSERT INTO post (date, title) VALUES (?, ?)")
.set(new Utc(date))
.set(title)
.insert(new SingleOutcome<Integer>(Integer.class))
);
}
}
Next, let’s implement the Post interface in class PgPost:
final class PgPost implements Post {
private final DataSource dbase;
private final int number;
public PgPost(DataSource data, int id) {
this.dbase = data;
this.number = id;
}
public int id() {
return this.number;
}
public Date date() {
return new JdbcSession(this.dbase)
.sql("SELECT date FROM post WHERE id = ?")
.set(this.number)
.select(new SingleOutcome<Utc>(Utc.class));
}
public String title() {
return new JdbcSession(this.dbase)
.sql("SELECT title FROM post WHERE id = ?")
.set(this.number)
.select(new SingleOutcome<String>(String.class));
}
}
This is how a full database interaction scenario would look using the classes we just created:
Posts posts = new PgPosts(dbase);
for (Post post : posts.iterate()){
System.out.println("Title: " + post.title());
}
Post post = posts.add(
new Date(), "How to cook an omelette"
);
System.out.println("Just added post #" + post.id());
You can see a full practical example here. It’s an open source web app that works with PostgreSQL using the exact approach explained above—SQL-speaking objects.
What About Performance?
I can hear you screaming, “What about performance?” In that script a few lines above, we’re making many redundant round trips to the database. First, we retrieve post IDs with SELECT id and then, in order to get their titles, we make an extra SELECT title call for each post. This is inefficient, or simply put, too slow.
No worries; this is object-oriented programming, which means it is flexible! Let’s create a decorator of PgPost that will accept all data in its constructor and cache it internally, forever:
final class ConstPost implements Post {
private final Post origin;
private final Date dte;
private final String ttl;
public ConstPost(Post post, Date date, String title) {
this.origin = post;
this.dte = date;
this.ttl = title;
}
public int id() {
return this.origin.id();
}
public Date date() {
return this.dte;
}
public String title() {
return this.ttl;
}
}
Pay attention: This decorator doesn’t know anything about PostgreSQL or JDBC. It just decorates an object of type Post and pre-caches the date and title. As usual, this decorator is also immutable.
Now let’s create another implementation of Posts that will return the “constant” objects:
final class ConstPgPosts implements Posts {
// ...
public Iterable<Post> iterate() {
return new JdbcSession(this.dbase)
.sql("SELECT * FROM post")
.select(
new ListOutcome<Post>(
new ListOutcome.Mapping<Post>() {
@Override
public Post map(final ResultSet rset) throws SQLException {
return new ConstPost(
new PgPost(
ConstPgPosts.this.dbase,
rset.getInt(1)
),
Utc.getTimestamp(rset, 2),
rset.getString(3)
);
}
}
)
);
}
}
Now all posts returned by iterate() of this new class are pre-equipped with dates and titles fetched in one round trip to the database.
Using decorators and multiple implementations of the same interface, you can compose any functionality you wish. What is the most important is that while functionality is being extended, the complexity of the design is not escalating, because classes don’t grow in size. Instead, we’re introducing new classes that stay cohesive and solid, because they are small.
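To make the composition claim concrete, here is a sketch of one more decorator (LoggedPost and PlainPost are hypothetical names of mine, not from the article): a logging concern added without touching PgPost or ConstPost. The Post interface is repeated so the sketch compiles on its own.

```java
import java.util.Date;

interface Post {
    int id();
    Date date();
    String title();
}

// Hypothetical decorator: reports every call, then delegates.
final class LoggedPost implements Post {
    private final Post origin;
    LoggedPost(Post post) {
        this.origin = post;
    }
    @Override
    public int id() {
        System.out.println("id() asked");
        return this.origin.id();
    }
    @Override
    public Date date() {
        System.out.println("date() asked");
        return this.origin.date();
    }
    @Override
    public String title() {
        System.out.println("title() asked");
        return this.origin.title();
    }
}

// A trivial concrete Post, only to demonstrate the decoration.
final class PlainPost implements Post {
    @Override
    public int id() {
        return 7;
    }
    @Override
    public Date date() {
        return new Date(0L);
    }
    @Override
    public String title() {
        return "hello";
    }
}
```

Decorators of this kind stack freely: new LoggedPost(new ConstPost(...)) would add logging on top of caching, and each class stays small.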
What About Transactions?
Every object should deal with its own transactions and encapsulate them the same way as SELECT or INSERT queries. This will lead to nested transactions, which is perfectly fine provided the database server supports them. If there is no such support, create a session-wide transaction object that will accept a “callable” class. For example:
final class Txn {
private final DataSource dbase;
public Txn(DataSource data) {
this.dbase = data;
}
public <T> T call(Callable<T> callable) throws Exception {
JdbcSession session = new JdbcSession(this.dbase);
try {
session.sql("START TRANSACTION").exec();
T result = callable.call();
session.sql("COMMIT").exec();
return result;
} catch (Exception ex) {
session.sql("ROLLBACK").exec();
throw ex;
}
}
}
Then, when you want to wrap a few object manipulations in one transaction, do it like this:
new Txn(dbase).call(
new Callable<Integer>() {
@Override
public Integer call() {
Posts posts = new PgPosts(dbase);
Post post = posts.add(
new Date(), "How to cook an omelette"
);
posts.comments().post("This is my first comment!");
return post.id();
}
}
);
This code will create a new post and post a comment to it. If one of the calls fails, the entire transaction will be rolled back.
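The commit/rollback ordering of such a Txn object is easy to verify without a database. In this sketch (RecordingSession and FakeTxn are hypothetical names of mine), a stand-in session merely records the statements it was asked to execute:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;

// Hypothetical stand-in for a JDBC session: records statements
// instead of executing them, so the ordering can be observed.
final class RecordingSession {
    final List<String> executed = new ArrayList<>();
    void exec(String sql) {
        this.executed.add(sql);
    }
}

// Same shape as Txn above, but against the recording session.
final class FakeTxn {
    private final RecordingSession session;
    FakeTxn(RecordingSession session) {
        this.session = session;
    }
    <T> T call(Callable<T> callable) throws Exception {
        this.session.exec("START TRANSACTION");
        try {
            T result = callable.call();
            this.session.exec("COMMIT");
            return result;
        } catch (Exception ex) {
            this.session.exec("ROLLBACK");
            throw ex;
        }
    }
}
```

A successful callable leaves START TRANSACTION followed by COMMIT in the log; a throwing callable leaves START TRANSACTION followed by ROLLBACK, and the exception propagates to the caller.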
This approach looks object-oriented to me. I’m calling it “SQL-speaking objects,” because they know how to speak SQL with the database server. It’s their skill, perfectly encapsulated inside their borders.
A library is essentially a set of functions that you can call, these days usually organized into classes.
Functions organized into classes? With all due respect, this is wrong. And it is a very common misconception of a class in object-oriented programming. Classes are not organizers of functions. And objects are not data structures.
So what is a “proper” object? Which one is not a proper one? What is the difference? Even though this is a highly polemical subject, it is very important. Unless we understand what an object is, how can we write object-oriented software? Well, thanks to Java, Ruby, and others, we can. But how good will it be? Unfortunately, this is not an exact science, and there are many opinions. Here is my list of qualities of a good object.
Class vs. Object

Before we start talking about objects, let’s define what a class is. It is a place where objects are being born (a.k.a. instantiated). The main responsibility of a class is to construct new objects on demand and destroy them when they are not used anymore. A class knows how its children should look and how they should behave. In other words, it knows what contracts they should obey.
Sometimes I hear classes being called “object templates” (for example, Wikipedia says so). This definition is not correct because it places classes into a passive position. This definition assumes that someone will get a template and build an object by using it. This may be true, technically speaking, but conceptually it’s wrong. Nobody else should be involved—there are only a class and its children. An object asks a class to create another object, and the class constructs it; that’s it. Ruby expresses this concept much better than Java or C++:
photo = File.new('/tmp/photo.png')
The object photo is constructed by the class File (new is an entry point to the class). Once constructed, the object is acting on its own. It shouldn’t know who constructed it and how many more brothers and sisters it has in the class. Yes, I mean that reflection is a terrible idea, but I’ll write more about it in one of the next posts :) Now, let’s talk about objects and their best and worst sides.
1. He Exists in Real Life

First of all, an object is a living organism. Moreover, an object should be anthropomorphized, i.e. treated like a human being (or a pet, if you like them more). By this I basically mean that an object is not a data structure or a collection of functions. Instead, it is an independent entity with its own life cycle, its own behavior, and its own habits.
An employee, a department, an HTTP request, a table in MySQL, a line in a file, or a file itself are proper objects—because they exist in real life, even when our software is turned off. To be more precise, an object is a representative of a real-life creature. It is a proxy of that real-life creature in front of all other objects. Without such a creature, there is—obviously—no object.
photo = File.new('/tmp/photo.png')
puts photo.width()
In this example, I’m asking File to construct a new object photo, which will be a representative of a real file on disk. You may say that a file is also something virtual and exists only when the computer is turned on. I would agree and refine the definition of “real life” as follows: It is everything that exists aside from the scope of the program the object lives in. The disk file is outside the scope of our program; that’s why it is perfectly correct to create its representative inside the program.
A controller, a parser, a filter, a validator, a service locator, a singleton, or a factory are not good objects (yes, most GoF patterns are anti-patterns!). They don’t exist apart from your software, in real life. They are invented just to tie other objects together. They are artificial and fake creatures. They don’t represent anyone. Seriously, an XML parser—who does it represent? Nobody.
Some of them may become good if they change their names; others can never excuse their existence. For example, that XML parser can be renamed to “parseable XML” and start to represent an XML document that exists outside of our scope.
Always ask yourself, “What is the real-life entity behind my object?” If you can’t find an answer, start thinking about refactoring.
2. He Works by Contracts

A good object always works by contracts. He expects to be hired not because of his personal merits but because he obeys the contracts. On the other hand, when we hire an object, we shouldn’t discriminate and expect some specific object from a specific class to do the work for us. We should expect any object to do what our contract says. As long as the object does what we need, we should not be interested in his class of origin, his sex, or his religion.
For example, I need to show a photo on the screen. I want that photo to be read from a file in PNG format. I’m contracting an object from class DataFile and asking him to give me the binary content of that image.
But wait, do I care where exactly the content will come from—the file on disk, or an HTTP request, or maybe a document in Dropbox? Actually, I don’t. All I care about is that some object gives me a byte array with PNG content. So my contract would look like this:
interface Binary {
byte[] read();
}
Now, any object from any class (not just DataFile) can work for me. All he has to do, in order to be eligible, is to obey the contract—by implementing the interface Binary.
The rule here is simple: every public method in a good object should implement his counterpart from an interface. If your object has public methods that are not inherited from any interface, he is badly designed.
There are two practical reasons for this. First, an object working without a contract is impossible to mock in a unit test. Second, a contract-less object is impossible to extend via decoration.
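Both reasons are easy to demonstrate with Binary itself. In this sketch (FakeBinary and NonEmptyBinary are hypothetical names of mine), a hand-written fake replaces a mocking framework in tests, and a decorator extends behavior without inheritance:

```java
interface Binary {
    byte[] read();
}

// Hypothetical fake for unit tests: a canned byte array
// instead of a real disk file or HTTP request.
final class FakeBinary implements Binary {
    private final byte[] bytes;
    FakeBinary(byte[] data) {
        this.bytes = data.clone();
    }
    @Override
    public byte[] read() {
        return this.bytes.clone();
    }
}

// Hypothetical decorator: refuses to return empty content.
final class NonEmptyBinary implements Binary {
    private final Binary origin;
    NonEmptyBinary(Binary binary) {
        this.origin = binary;
    }
    @Override
    public byte[] read() {
        final byte[] data = this.origin.read();
        if (data.length == 0) {
            throw new IllegalStateException("content is empty");
        }
        return data;
    }
}
```

Neither class knows or cares about the other’s class of origin; the contract Binary is the only thing connecting them.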
3. He Is Unique
A good object should always encapsulate something in order to be unique. If there is nothing to encapsulate, an object may have identical clones, which I believe is bad. Here is an example of a bad object, which may have clones:
class HTTPStatus implements Status {
private URL page = new URL("http://localhost");
@Override
public int read() throws IOException {
return HttpURLConnection.class.cast(
this.page.openConnection()
).getResponseCode();
}
}
I can create a few instances of class HTTPStatus, and all of them will be equal to each other:
first = new HTTPStatus();
second = new HTTPStatus();
assert first.equals(second);
Obviously utility classes, which have only static methods, can’t instantiate good objects. More generally, utility classes don’t have any of the merits mentioned in this article and can’t even be called “classes.” They are simply terrible abusers of an object paradigm and exist in modern object-oriented languages only because their inventors enabled static methods.
4. He Is Immutable
A good object should never change his encapsulated state. Remember, an object is a representative of a real-life entity, and this entity should stay the same through the entire life of the object. In other words, an object should never betray those whom he represents. He should never change owners. :)
Be aware that immutability doesn’t mean that all methods always return the same values. Instead, a good immutable object is very dynamic. However, he never changes his internal state. For example:
@Immutable
final class HTTPStatus implements Status {
private final URL page;
public HTTPStatus(URL url) {
this.page = url;
}
@Override
public int read() throws IOException {
return HttpURLConnection.class.cast(
this.page.openConnection()
).getResponseCode();
}
}
Even though the method read() may return different values, the object is immutable. He points to a certain web page and will never point anywhere else. He will never change his encapsulated state, and he will never betray the URL he represents.
Why is immutability a virtue? This article explains in detail: Objects Should Be Immutable. In a nutshell, immutable objects are better because:
- Immutable objects are simpler to construct, test, and use.
- Truly immutable objects are always thread-safe.
- They help avoid temporal coupling.
- Their usage is side-effect free (no defensive copies).
- They always have failure atomicity.
- They are much easier to cache.
- They prevent NULL references.
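The “no setters, no side effects” point can be sketched in a few lines (Target and withPage are hypothetical names of mine; I use java.net.URI here only to keep the sketch free of checked exceptions): a modification request is answered with a new object, and the original never changes.

```java
import java.net.URI;

// Hypothetical immutable object: "changing" the page yields a new instance.
final class Target {
    private final URI page;
    Target(URI uri) {
        this.page = uri;
    }
    public URI page() {
        return this.page;
    }
    public Target withPage(URI uri) {
        return new Target(uri); // the original object is never touched
    }
}
```

Any code holding a reference to the first object can rely on it forever; no defensive copies, no synchronization, no temporal coupling.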
Of course, a good object doesn’t have setters, which may change his state and force him to betray the URL. In other words, introducing a setURL() method would be a terrible mistake in class HTTPStatus.
Besides all that, immutable objects will force you to make more cohesive, solid, and understandable designs, as this article explains: How Immutability Helps.
5. His Class Doesn’t Have Anything Static
A static method implements a behavior of a class, not an object. Let’s say we have class File, and his children have method size():
final class File implements Measurable {
@Override
public int size() {
// calculate the size of the file and return
}
}
So far, so good; the method size() is there because of the contract Measurable, and every object of class File will be able to measure his size. A terrible mistake would be to design this class with a static method instead (this design is also known as a utility class and is very popular in Java, Ruby, and almost every OOP language):
// TERRIBLE DESIGN, DON'T USE!
class File {
public static int size(String file) {
// calculate the size of the file and return
}
}
This design runs completely against the object-oriented paradigm. Why? Because static methods turn object-oriented programming into “class-oriented” programming. This method, size(), exposes the behavior of the class, not of his objects. What’s wrong with this, you may ask? Why can’t we have both objects and classes as first-class citizens in our code? Why can’t both of them have methods and properties?
The problem is that with class-oriented programming, decomposition doesn’t work anymore. We can’t break down a complex problem into parts, because only a single instance of a class exists in the entire program. The power of OOP is that it allows us to use objects as an instrument for scope decomposition. When I instantiate an object inside a method, he is dedicated to my specific task. He is perfectly isolated from all other objects around the method. This object is a local variable in the scope of the method. A class, with his static methods, is always a global variable no matter where I use him. Because of that, I can’t isolate my interaction with this variable from others.
Besides being conceptually against object-oriented principles, public static methods have a few practical drawbacks:
First, it’s impossible to mock them (Well, you can use PowerMock, but this will then be the most terrible decision you could make in a Java project… I made it once, a few years ago).
Second, they are not thread-safe by definition, because they always work with static variables, which are accessible from all threads. You can make them thread-safe, but this will always require explicit synchronization.
Every time you see a public static method, start rewriting immediately. I don’t even want to mention how terrible static (or global) variables are. I think it is just obvious.
6. His Name Is Not a Job Title

The name of an object should tell us what this object is, not what it does, just like we name objects in real life: book instead of page aggregator, cup instead of water holder, T-shirt instead of body dresser. There are exceptions, of course, like printer or computer, but they were invented just recently and by those who didn’t read this article. :)
For example, these names tell us who their owners are: an apple, a file, a series of HTTP requests, a socket, an XML document, a list of users, a regular expression, an integer, a PostgreSQL table, or Jeffrey Lebowski. A properly named object is always possible to draw as a small picture. Even a regular expression can be drawn.
In the opposite, here is an example of names that tell us what their owners do: a file reader, a text parser, a URL validator, an XML printer, a service locator, a singleton, a script runner, or a Java programmer. Can you draw any of them? No, you can’t. These names are not suitable for good objects. They are terrible names that lead to terrible design.
In general, avoid names that end with “-er”—most of them are bad.
“What is the alternative to a FileReader?” I hear you asking. What would be a better name? Let’s see. We already have File, which is a representative of a real-world file on disk. This representative is not powerful enough for us, because he doesn’t know how to read the content of the file. We want to create a more powerful one that will have that ability. What would we call him? Remember, the name should say what he is, not what he does. What is he? He is a file that has data; not just a file, like File, but a more sophisticated one, with data. So how about FileWithData or simply DataFile?
The same logic should be applicable to all other names. Always think about what it is rather than what it does. Give your objects real, meaningful names instead of job titles.
More about this in Don’t Create Objects That End With -ER.
7. His Class Is Either Final or Abstract

A good object comes from either a final or abstract class. A final class is one that can’t be extended via inheritance. An abstract class is one that can’t have instances. Simply put, a class should either say, “You can never break me; I’m a black box for you” or “I’m broken already; fix me first and then use.”
There is nothing in between. A final class is a black box that you can’t modify by any means. He works as he works, and you either use him or throw him away. You can’t create another class that will inherit his properties. This is not allowed because of that final modifier. The only way to extend such a final class is through decoration. Let’s say I have the class HTTPStatus (see above), and I don’t like him. Well, I like him, but he’s not powerful enough for me. I want him to throw an exception if the HTTP status is 400 or higher. I want his method, read(), to do more than it does now. A traditional way would be to extend the class and override his method:
class OnlyValidStatus extends HTTPStatus {
public OnlyValidStatus(URL url) {
super(url);
}
@Override
public int read() throws IOException {
int code = super.read();
if (code >= 400) {
throw new RuntimeException("Unsuccessful HTTP code");
}
return code;
}
}
Why is this wrong? It is very wrong because we risk breaking the logic of the entire parent class by overriding one of his methods. Remember, once we override the method read() in the child class, all methods from the parent class start to use his new version. We’re literally injecting a new “piece of implementation” right into the class. Philosophically speaking, this is an offense.
On the other hand, to extend a final class, you have to treat him like a black box and decorate him with your own implementation (a.k.a. Decorator Pattern):
final class OnlyValidStatus implements Status {
private final Status origin;
public OnlyValidStatus(Status status) {
this.origin = status;
}
@Override
public int read() throws IOException {
int code = this.origin.read();
if (code >= 400) {
throw new RuntimeException("Unsuccessful HTTP code");
}
return code;
}
}
Make sure that this class is implementing the same interface as the original one: Status. The instance of HTTPStatus will be passed into him through the constructor and encapsulated. Then every call will be intercepted and implemented in a different way, if necessary. In this design, we treat the original object as a black box and never touch his internal logic.
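Here is how the decorated object is composed in practice, in a self-contained sketch (the Status interface and the decorator are repeated from the text so the sketch compiles alone; the fixed-code stubs, written as lambdas, are mine):

```java
import java.io.IOException;

interface Status {
    int read() throws IOException;
}

// The decorator from the text, repeated for completeness.
final class OnlyValidStatus implements Status {
    private final Status origin;
    OnlyValidStatus(Status status) {
        this.origin = status;
    }
    @Override
    public int read() throws IOException {
        final int code = this.origin.read();
        if (code >= 400) {
            throw new RuntimeException("Unsuccessful HTTP code");
        }
        return code;
    }
}
```

Since Status has a single abstract method, a test can pass a lambda such as () -> 200 instead of a real HTTPStatus, which is exactly the mocking benefit that contracts give us.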
If you don’t use that final keyword, anyone (including yourself) will be able to extend the class and… offend him :( So a class without final is a bad design.
An abstract class is the exact opposite case—he tells us that he is incomplete and we can’t use him “as is.” We have to inject our custom implementation logic into him, but only into the places he allows us to touch. These places are explicitly marked as abstract methods. For example, our HTTPStatus may look like this:
abstract class ValidatedHTTPStatus implements Status {
private final Status origin;
protected ValidatedHTTPStatus(Status status) {
this.origin = status;
}
@Override
public final int read() throws IOException {
int code = this.origin.read();
if (!this.isValid()) {
throw new RuntimeException("Unsuccessful HTTP code");
}
return code;
}
protected abstract boolean isValid();
}
As you see, the class doesn’t know how exactly to validate the HTTP code, and he expects us to inject that logic through inheritance and through overriding the method isValid(). We’re not going to offend him with this inheritance, since he defended all other methods with final (pay attention to the modifiers of his methods). Thus, the class is ready for our offense and is perfectly guarded against it.
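A concrete child would then look like the sketch below (ValidatedStatus and Below400 are names of mine; the sketch carries its own origin field and passes the HTTP code into isValid() so that it compiles on its own):

```java
import java.io.IOException;

interface Status {
    int read() throws IOException;
}

// Self-contained variant of the abstract class from the text.
abstract class ValidatedStatus implements Status {
    private final Status origin;
    protected ValidatedStatus(Status status) {
        this.origin = status;
    }
    @Override
    public final int read() throws IOException {
        final int code = this.origin.read();
        if (!this.isValid(code)) {
            throw new RuntimeException("Unsuccessful HTTP code");
        }
        return code;
    }
    // The only injection point the parent allows.
    protected abstract boolean isValid(int code);
}

// Hypothetical child: injects the validation logic, nothing else.
final class Below400 extends ValidatedStatus {
    Below400(Status status) {
        super(status);
    }
    @Override
    protected boolean isValid(int code) {
        return code < 400;
    }
}
```

The child can override only the one method its parent marked abstract; everything else is sealed with final.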
To summarize, your class should either be final or abstract—nothing in between.
Update (April 2017): If you also agree that implementation inheritance is evil, all your classes must be final.
4. He Is Immutable
A good object should never change his encapsulated state. Remember, an object is a representative of a real-life entity, and this entity should stay the same through the entire life of the object. In other words, an object should never betray those whom he represents. He should never change owners. :)
Be aware that immutability doesn’t mean that all methods always return the same values. Instead, a good immutable object is very dynamic. However, he never changes his internal state. For example:
@Immutable
final class HTTPStatus implements Status {
private URL page;
public HTTPStatus(URL url) {
this.page = url;
}
@Override
public int read() throws IOException {
return HttpURLConnection.class.cast(
this.page.openConnection()
).getResponseCode();
}
}Even though the method read() may return different values, the object is immutable. He points to a certain web page and will never point anywhere else. He will never change his encapsulated state, and he will never betray the URL he represents.
Why is immutability a virtue? This article explains in detail: Objects Should Be Immutable. In a nutshell, immutable objects are better because:
- Immutable objects are simpler to construct, test, and use.
- Truly immutable objects are always thread-safe.
- They help avoid temporal coupling.
- Their usage is side-effect free (no defensive copies).
- They always have failure atomicity.
- They are much easier to cache.
- They prevent NULL references.
Of course, a good object doesn’t have setters, which may change his state and force him to betray the URL. In other words, introducing a setURL() method would be a terrible mistake in class HTTPStatus.
Besides all that, immutable objects will force you to make more cohesive, solid, and understandable designs, as this article explains: How Immutability Helps.
5. His Class Doesn’t Have Anything Static
A static method implements a behavior of a class, not an object. Let’s say we have class File, and his children have method size():
final class File implements Measurable {
@Override
public int size() {
// calculate the size of the file and return
}
}So far, so good; the method size() is there because of the contract Measurable, and every object of class File will be able to measure his size. A terrible mistake would be to design this class with a static method instead (this design is also known as a utility class and is very popular in Java, Ruby, and almost every OOP language):
// TERRIBLE DESIGN, DON'T USE!
class File {
public static int size(String file) {
// calculate the size of the file and return
}
}This design runs completely against the object-oriented paradigm. Why? Because static methods turn object-oriented programming into “class-oriented” programming. This method, size(), exposes the behavior of the class, not of his objects. What’s wrong with this, you may ask? Why can’t we have both objects and classes as first-class citizens in our code? Why can’t both of them have methods and properties?
The problem is that with class-oriented programming, decomposition doesn’t work anymore. We can’t break down a complex problem into parts, because only a single instance of a class exists in the entire program. The power of OOP is that it allows us to use objects as an instrument for scope decomposition. When I instantiate an object inside a method, he is dedicated to my specific task. He is perfectly isolated from all other objects around the method. This object is a local variable in the scope of the method. A class, with his static methods, is always a global variable no matter where I use him. Because of that, I can’t isolate my interaction with this variable from others.
Besides being conceptually against object-oriented principles, public static methods have a few practical drawbacks:
First, it’s impossible to mock them (Well, you can use PowerMock, but this will then be the most terrible decision you could make in a Java project… I made it once, a few years ago).
Second, they are not thread-safe by definition, because they always work with static variables, which are accessible from all threads. You can make them thread-safe, but this will always require explicit synchronization.
Every time you see a public static method, start rewriting immediately. I don’t even want to mention how terrible static (or global) variables are. I think it is just obvious.
6. His Name Is Not a Job Title

The name of an object should tell us what this object is, not what it does, just like we name objects in real life: book instead of page aggregator, cup instead of water holder, T-shirt instead of body dresser. There are exceptions, of course, like printer or computer, but they were invented just recently and by those who didn’t read this article. :)
For example, these names tell us who their owners are: an apple, a file, a series of HTTP requests, a socket, an XML document, a list of users, a regular expression, an integer, a PostgreSQL table, or Jeffrey Lebowski. A properly named object is always possible to draw as a small picture. Even a regular expression can be drawn.
In the opposite, here is an example of names that tell us what their owners do: a file reader, a text parser, a URL validator, an XML printer, a service locator, a singleton, a script runner, or a Java programmer. Can you draw any of them? No, you can’t. These names are not suitable for good objects. They are terrible names that lead to terrible design.
In general, avoid names that end with “-er”—most of them are bad.
“What is the alternative of a FileReader?” I hear you asking. What would be a better name? Let’s see. We already have File, which is a representative of a real-world file on disk. This representative is not powerful enough for us, because he doesn’t know how to read the content of the file. We want to create a more powerful one that will have that ability. What would we call him? Remember, the name should say what he is, not what he does. What is he? He is a file that has data; not just a file, like File, but a more sophisticated one, with data. So how about FileWithData or simply DataFile?
The same logic should be applicable to all other names. Always think about what it is rather than what it does. Give your objects real, meaningful names instead of job titles.
More about this in Don’t Create Objects That End With -ER.
7. His Class Is Either Final or Abstract

A good object comes from either a final or abstract class. A final class is one that can’t be extended via inheritance. An abstract class is one that can’t have instances. Simply put, a class should either say, “You can never break me; I’m a black box for you” or “I’m broken already; fix me first and then use.”
There is nothing in between. A final class is a black box that you can’t modify by any means. He works as he works, and you either use him or throw him away. You can’t create another class that will inherit his properties. This is not allowed because of that final modifier. The only way to extend such a final class is through decoration of his children. Let’s say I have the class HTTPStatus (see above), and I don’t like him. Well, I like him, but he’s not powerful enough for me. I want him to throw an exception if HTTP status is over 400. I want his method, read(), to do more that it does now. A traditional way would be to extend the class and overwrite his method:
class OnlyValidStatus extends HTTPStatus {
public OnlyValidStatus(URL url) {
super(url);
}
@Override
public int read() throws IOException {
int code = super.read();
if (code >= 400) {
throw new RuntimeException("Unsuccessful HTTP code");
}
return code;
}
}Why is this wrong? It is very wrong because we risk breaking the logic of the entire parent class by overriding one of his methods. Remember, once we override the method read() in the child class, all methods from the parent class start to use his new version. We’re literally injecting a new “piece of implementation” right into the class. Philosophically speaking, this is an offense.
On the other hand, to extend a final class, you have to treat him like a black box and decorate him with your own implementation (a.k.a. Decorator Pattern):
final class OnlyValidStatus implements Status {
private final Status origin;
public OnlyValidStatus(Status status) {
this.origin = status;
}
@Override
public int read() throws IOException {
int code = this.origin.read();
if (code >= 400) {
throw new RuntimeException("Unsuccessful HTTP code");
}
return code;
}
}Make sure that this class is implementing the same interface as the original one: Status. The instance of HTTPStatus will be passed into him through the constructor and encapsulated. Then every call will be intercepted and implemented in a different way, if necessary. In this design, we treat the original object as a black box and never touch his internal logic.
If you don’t use that final keyword, anyone (including yourself) will be able to extend the class and… offend him :( So a class without final is a bad design.
An abstract class is the exact opposite case—he tells us that he is incomplete and we can’t use him “as is.” We have to inject our custom implementation logic into him, but only into the places he allows us to touch. These places are explicitly marked as abstract methods. For example, our HTTPStatus may look like this:
abstract class ValidatedHTTPStatus implements Status {
@Override
public final int read() throws IOException {
int code = this.origin.read();
if (!this.isValid()) {
throw new RuntimeException("Unsuccessful HTTP code");
}
return code;
}
protected abstract boolean isValid();
}As you see, the class doesn’t know how exactly to validate the HTTP code, and he expects us to inject that logic through inheritance and through overriding the method isValid(). We’re not going to offend him with this inheritance, since he defended all other methods with final (pay attention to the modifiers of his methods). Thus, the class is ready for our offense and is perfectly guarded against it.
To summarize, your class should either be final or abstract—nothing in between.
Update (April 2017): If you also agree that implementation inheritance is evil, all your classes must be final.
Seven Virtues of a Good Object
by Yegor Bugayenko
https://www.yegor256.com/2014/11/20/seven-virtues-of-good-object.html
Martin Fowler says:
A library is essentially a set of functions that you can call, these days usually organized into classes.
Functions organized into classes? With all due respect, this is wrong. And it is a very common misconception of a class in object-oriented programming. Classes are not organizers of functions. And objects are not data structures.
So what is a “proper” object? Which one is not a proper one? What is the difference? Even though it is a very polemic subject, it is very important. Unless we understand what an object is, how can we write object-oriented software? Well, thanks to Java, Ruby, and others, we can. But how good will it be? Unfortunately, this is not an exact science, and there are many opinions. Here is my list of qualities of a good object.
Class vs. Object

Before we start talking about objects, let’s define what a class is. It is a place where objects are being born (a.k.a. instantiated). The main responsibility of a class is to construct new objects on demand and destruct them when they are not used anymore. A class knows how its children should look and how they should behave. In other words, it knows what contracts they should obey.
Sometimes I hear classes being called “object templates” (for example, Wikipedia says so). This definition is not correct because it places classes into a passive position. This definition assumes that someone will get a template and build an object by using it. This may be true, technically speaking, but conceptually it’s wrong. Nobody else should be involved—there are only a class and its children. An object asks a class to create another object, and the class constructs it; that’s it. Ruby expresses this concept much better than Java or C++:
photo = File.new('/tmp/photo.png')
The object photo is constructed by the class File (new is an entry point to the class). Once constructed, the object is acting on its own. It shouldn’t know who constructed it and how many more brothers and sisters it has in the class. Yes, I mean that reflection is a terrible idea, but I’ll write more about it in one of the next posts :) Now, let’s talk about objects and their best and worst sides.
1. He Exists in Real Life

First of all, an object is a living organism. Moreover, an object should be anthropomorphized, i.e. treated like a human being (or a pet, if you like them more). By this I basically mean that an object is not a data structure or a collection of functions. Instead, it is an independent entity with its own life cycle, its own behavior, and its own habits.
An employee, a department, an HTTP request, a table in MySQL, a line in a file, or a file itself are proper objects—because they exist in real life, even when our software is turned off. To be more precise, an object is a representative of a real-life creature. It is a proxy of that real-life creature in front of all other objects. Without such a creature, there is—obviously—no object.
photo = File.new('/tmp/photo.png')
puts photo.width()
In this example, I’m asking File to construct a new object photo, which will be a representative of a real file on disk. You may say that a file is also something virtual and exists only when the computer is turned on. I would agree and refine the definition of “real life” as follows: It is everything that exists aside from the scope of the program the object lives in. The disk file is outside the scope of our program; that’s why it is perfectly correct to create its representative inside the program.
A controller, a parser, a filter, a validator, a service locator, a singleton, or a factory are not good objects (yes, most GoF patterns are anti-patterns!). They don’t exist apart from your software, in real life. They are invented just to tie other objects together. They are artificial and fake creatures. They don’t represent anyone. Seriously, an XML parser—who does it represent? Nobody.
Some of them may become good if they change their names; others can never excuse their existence. For example, that XML parser can be renamed to “parseable XML” and start to represent an XML document that exists outside of our scope.
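To make the renaming concrete, here is a toy sketch. The class name ParsedXML and its internals are hypothetical, and the "parsing" is deliberately naive; the point is only the noun-shaped role: not a worker that parses XML, but an object that is an XML document.

```java
// Hypothetical sketch: an object that *is* a document from the outside world,
// not an "XmlParser" that merely works on one.
final class ParsedXML {
  private final String markup;

  ParsedXML(String markup) {
    this.markup = markup;
  }

  // Deliberately naive: returns the name of the first tag.
  String root() {
    int start = this.markup.indexOf('<') + 1;
    int end = this.markup.indexOf('>');
    return this.markup.substring(start, end);
  }
}
```

A real implementation would delegate to a proper XML library; only the name and the role matter here.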
Always ask yourself, “What is the real-life entity behind my object?” If you can’t find an answer, start thinking about refactoring.
2. He Works by Contracts

A good object always works by contracts. He expects to be hired not because of his personal merits but because he obeys the contracts. On the other hand, when we hire an object, we shouldn’t discriminate and expect some specific object from a specific class to do the work for us. We should expect any object to do what our contract says. As long as the object does what we need, we should not be interested in his class of origin, his sex, or his religion.
For example, I need to show a photo on the screen. I want that photo to be read from a file in PNG format. I’m contracting an object from class DataFile and asking him to give me the binary content of that image.
But wait, do I care where exactly the content will come from—the file on disk, or an HTTP request, or maybe a document in Dropbox? Actually, I don’t. All I care about is that some object gives me a byte array with PNG content. So my contract would look like this:
interface Binary {
  byte[] read();
}
Now, any object from any class (not just DataFile) can work for me. All he has to do, in order to be eligible, is to obey the contract—by implementing the interface Binary.
The rule here is simple: every public method in a good object should implement his counterpart from an interface. If your object has public methods that are not inherited from any interface, he is badly designed.
There are two practical reasons for this. First, an object working without a contract is impossible to mock in a unit test. Second, a contract-less object is impossible to extend via decoration.
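Both reasons are easy to see in code. Here is a minimal sketch; the Binary interface is repeated so the snippet is self-contained, and FakeBinary and LoggedBinary are hypothetical names.

```java
interface Binary {
  byte[] read();
}

// A hand-rolled test double: obeys the contract without touching any disk.
final class FakeBinary implements Binary {
  private final byte[] data;

  FakeBinary(byte[] data) {
    this.data = data;
  }

  @Override
  public byte[] read() {
    return this.data.clone();
  }
}

// A decorator: extends behavior through composition, not inheritance.
final class LoggedBinary implements Binary {
  private final Binary origin;

  LoggedBinary(Binary origin) {
    this.origin = origin;
  }

  @Override
  public byte[] read() {
    byte[] bytes = this.origin.read();
    System.out.println("read " + bytes.length + " bytes");
    return bytes;
  }
}
```

Any code written against Binary accepts both of them without knowing the difference; without the contract, neither the fake nor the decorator would be possible.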
3. He Is Unique
A good object should always encapsulate something in order to be unique. If there is nothing to encapsulate, an object may have identical clones, which I believe is bad. Here is an example of a bad object, which may have clones:
class HTTPStatus implements Status {
  private final URL page;
  HTTPStatus() throws MalformedURLException {
    this.page = new URL("http://localhost");
  }
  @Override
  public int read() throws IOException {
    return HttpURLConnection.class.cast(
      this.page.openConnection()
    ).getResponseCode();
  }
}
I can create a few instances of class HTTPStatus, and all of them will be equal to each other:
HTTPStatus first = new HTTPStatus();
HTTPStatus second = new HTTPStatus();
assert first.equals(second); // true, assuming equals() compares encapsulated state
Obviously, utility classes, which have only static methods, can’t instantiate good objects. More generally, utility classes don’t have any of the merits mentioned in this article and can’t even be called “classes.” They are simply terrible abusers of the object paradigm and exist in modern object-oriented languages only because their inventors enabled static methods.
4. He Is Immutable
A good object should never change his encapsulated state. Remember, an object is a representative of a real-life entity, and this entity should stay the same through the entire life of the object. In other words, an object should never betray those whom he represents. He should never change owners. :)
Be aware that immutability doesn’t mean that all methods always return the same values. Instead, a good immutable object is very dynamic. However, he never changes his internal state. For example:
@Immutable
final class HTTPStatus implements Status {
  private final URL page;
  public HTTPStatus(URL url) {
    this.page = url;
  }
  @Override
  public int read() throws IOException {
    return HttpURLConnection.class.cast(
      this.page.openConnection()
    ).getResponseCode();
  }
}
Even though the method read() may return different values, the object is immutable. He points to a certain web page and will never point anywhere else. He will never change his encapsulated state, and he will never betray the URL he represents.
Why is immutability a virtue? This article explains in detail: Objects Should Be Immutable. In a nutshell, immutable objects are better because:
- Immutable objects are simpler to construct, test, and use.
- Truly immutable objects are always thread-safe.
- They help avoid temporal coupling.
- Their usage is side-effect free (no defensive copies).
- They always have failure atomicity.
- They are much easier to cache.
- They prevent NULL references.
Of course, a good object doesn’t have setters, which may change his state and force him to betray the URL. In other words, introducing a setURL() method would be a terrible mistake in class HTTPStatus.
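If a new state is genuinely needed, an immutable object returns a new object instead of mutating himself. A sketch with a hypothetical Price class:

```java
// Hypothetical immutable object: "modification" produces a new instance.
final class Price {
  private final int cents;

  Price(int cents) {
    this.cents = cents;
  }

  int cents() {
    return this.cents;
  }

  // Instead of setCents(): the original object stays untouched.
  Price withCents(int cents) {
    return new Price(cents);
  }
}
```

Callers who hold the original Price can rely on it never changing underneath them.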
Besides all that, immutable objects will force you to make more cohesive, solid, and understandable designs, as this article explains: How Immutability Helps.
5. His Class Doesn’t Have Anything Static
A static method implements a behavior of a class, not an object. Let’s say we have class File, and his children have method size():
final class File implements Measurable {
  @Override
  public int size() {
    // calculate the size of the file and return it
  }
}
So far, so good; the method size() is there because of the contract Measurable, and every object of class File will be able to measure his size. A terrible mistake would be to design this class with a static method instead (this design is also known as a utility class and is very popular in Java, Ruby, and almost every OOP language):
// TERRIBLE DESIGN, DON'T USE!
class File {
  public static int size(String file) {
    // calculate the size of the file and return it
  }
}
This design runs completely against the object-oriented paradigm. Why? Because static methods turn object-oriented programming into “class-oriented” programming. This method, size(), exposes the behavior of the class, not of his objects. What’s wrong with this, you may ask? Why can’t we have both objects and classes as first-class citizens in our code? Why can’t both of them have methods and properties?
The problem is that with class-oriented programming, decomposition doesn’t work anymore. We can’t break down a complex problem into parts, because only a single instance of a class exists in the entire program. The power of OOP is that it allows us to use objects as an instrument for scope decomposition. When I instantiate an object inside a method, he is dedicated to my specific task. He is perfectly isolated from all other objects around the method. This object is a local variable in the scope of the method. A class, with his static methods, is always a global variable no matter where I use him. Because of that, I can’t isolate my interaction with this variable from others.
Besides being conceptually against object-oriented principles, public static methods have a few practical drawbacks:
First, it’s impossible to mock them (Well, you can use PowerMock, but this will then be the most terrible decision you could make in a Java project… I made it once, a few years ago).
Second, they are not thread-safe by default, because they tend to work with static variables, which are accessible from all threads. You can make them thread-safe, but this will always require explicit synchronization.
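The mocking problem disappears as soon as the behavior hides behind a contract. A sketch, where FakeFile and Report are hypothetical names:

```java
interface Measurable {
  int size();
}

// A test double: no disk access required.
final class FakeFile implements Measurable {
  private final int bytes;

  FakeFile(int bytes) {
    this.bytes = bytes;
  }

  @Override
  public int size() {
    return this.bytes;
  }
}

// Client code depends on the contract, not on a static File.size():
final class Report {
  private final Measurable file;

  Report(Measurable file) {
    this.file = file;
  }

  String line() {
    return "size: " + this.file.size();
  }
}
```

With the static version, Report would be welded to the real file system; here, a unit test simply hands it a FakeFile.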
Every time you see a public static method, start rewriting immediately. I don’t even want to mention how terrible static (or global) variables are. I think it is just obvious.
6. His Name Is Not a Job Title

The name of an object should tell us what this object is, not what it does, just like we name objects in real life: book instead of page aggregator, cup instead of water holder, T-shirt instead of body dresser. There are exceptions, of course, like printer or computer, but they were invented just recently and by those who didn’t read this article. :)
For example, these names tell us who their owners are: an apple, a file, a series of HTTP requests, a socket, an XML document, a list of users, a regular expression, an integer, a PostgreSQL table, or Jeffrey Lebowski. A properly named object is always possible to draw as a small picture. Even a regular expression can be drawn.
By contrast, here is an example of names that tell us what their owners do: a file reader, a text parser, a URL validator, an XML printer, a service locator, a singleton, a script runner, or a Java programmer. Can you draw any of them? No, you can’t. These names are not suitable for good objects. They are terrible names that lead to terrible design.
In general, avoid names that end with “-er”—most of them are bad.
“What is the alternative to a FileReader?” I hear you asking. What would be a better name? Let’s see. We already have File, which is a representative of a real-world file on disk. This representative is not powerful enough for us, because he doesn’t know how to read the content of the file. We want to create a more powerful one that will have that ability. What would we call him? Remember, the name should say what he is, not what he does. What is he? He is a file that has data; not just a file, like File, but a more sophisticated one, with data. So how about FileWithData or simply DataFile?
The same logic should be applicable to all other names. Always think about what it is rather than what it does. Give your objects real, meaningful names instead of job titles.
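One possible shape for such a DataFile, as a sketch: the class name comes from the article, but the internals here are assumptions.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

// A file that has data: still a noun, just a more capable one.
final class DataFile {
  private final Path path;

  DataFile(Path path) {
    this.path = path;
  }

  byte[] data() {
    try {
      return Files.readAllBytes(this.path);
    } catch (IOException ex) {
      throw new UncheckedIOException(ex);
    }
  }
}
```

Nothing about the name promises a "reading procedure"; the object simply is a file with data you can ask for.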
More about this in Don’t Create Objects That End With -ER.
7. His Class Is Either Final or Abstract

A good object comes from either a final or abstract class. A final class is one that can’t be extended via inheritance. An abstract class is one that can’t have instances. Simply put, a class should either say, “You can never break me; I’m a black box for you” or “I’m broken already; fix me first and then use.”
There is nothing in between. A final class is a black box that you can’t modify by any means. He works as he works, and you either use him or throw him away. You can’t create another class that will inherit his properties. This is not allowed because of that final modifier. The only way to extend such a final class is through decoration of his children. Let’s say I have the class HTTPStatus (see above), and I don’t like him. Well, I like him, but he’s not powerful enough for me. I want him to throw an exception if HTTP status is 400 or higher. I want his method, read(), to do more than it does now. A traditional way would be to extend the class and override his method:
class OnlyValidStatus extends HTTPStatus {
  public OnlyValidStatus(URL url) {
    super(url);
  }
  @Override
  public int read() throws IOException {
    int code = super.read();
    if (code >= 400) {
      throw new RuntimeException("Unsuccessful HTTP code");
    }
    return code;
  }
}
Why is this wrong? It is very wrong because we risk breaking the logic of the entire parent class by overriding one of his methods. Remember, once we override the method read() in the child class, all methods from the parent class start to use his new version. We’re literally injecting a new “piece of implementation” right into the class. Philosophically speaking, this is an offense.
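The trap is easy to reproduce with something smaller than HTTP. In this sketch (all class names hypothetical), the parent's label() silently picks up the child's overridden value():

```java
// Non-final base class: the anti-pattern under discussion.
class Celsius {
  private final double degrees;

  Celsius(double degrees) {
    this.degrees = degrees;
  }

  double value() {
    return this.degrees;
  }

  // Parent logic quietly relies on value()...
  String label() {
    return this.value() + "C";
  }
}

final class Rounded extends Celsius {
  Rounded(double degrees) {
    super(degrees);
  }

  // ...so overriding value() also changes what label() reports.
  @Override
  double value() {
    return Math.rint(super.value());
  }
}
```

The author of Rounded only meant to change value(), yet label() changed too, whether anyone wanted that or not.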
On the other hand, to extend a final class, you have to treat him like a black box and decorate him with your own implementation (a.k.a. Decorator Pattern):
final class OnlyValidStatus implements Status {
  private final Status origin;
  public OnlyValidStatus(Status status) {
    this.origin = status;
  }
  @Override
  public int read() throws IOException {
    int code = this.origin.read();
    if (code >= 400) {
      throw new RuntimeException("Unsuccessful HTTP code");
    }
    return code;
  }
}
Make sure that this class is implementing the same interface as the original one: Status. The instance of HTTPStatus will be passed into him through the constructor and encapsulated. Then every call will be intercepted and implemented in a different way, if necessary. In this design, we treat the original object as a black box and never touch his internal logic.
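Because decorators share the Status contract, they also stack. A network-free sketch, where FixedStatus and Logged are hypothetical stand-ins and OnlyValid mirrors the decorator above minus the IOException:

```java
interface Status {
  int read();
}

// Hypothetical fake: a status that needs no network.
final class FixedStatus implements Status {
  private final int code;

  FixedStatus(int code) {
    this.code = code;
  }

  @Override
  public int read() {
    return this.code;
  }
}

// Decorator: rejects unsuccessful codes.
final class OnlyValid implements Status {
  private final Status origin;

  OnlyValid(Status origin) {
    this.origin = origin;
  }

  @Override
  public int read() {
    int code = this.origin.read();
    if (code >= 400) {
      throw new IllegalStateException("Unsuccessful HTTP code");
    }
    return code;
  }
}

// A second decorator, to show that they compose.
final class Logged implements Status {
  private final Status origin;

  Logged(Status origin) {
    this.origin = origin;
  }

  @Override
  public int read() {
    int code = this.origin.read();
    System.out.println("status: " + code);
    return code;
  }
}
```

A call like new Logged(new OnlyValid(new FixedStatus(200))).read() passes through both layers, and neither decorator knows or cares what it wraps.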
If you don’t use that final keyword, anyone (including yourself) will be able to extend the class and… offend him :( So a class without final is a bad design.
An abstract class is the exact opposite case—he tells us that he is incomplete and we can’t use him “as is.” We have to inject our custom implementation logic into him, but only into the places he allows us to touch. These places are explicitly marked as abstract methods. For example, our HTTPStatus may look like this:
abstract class ValidatedHTTPStatus implements Status {
  private final Status origin;
  protected ValidatedHTTPStatus(Status origin) {
    this.origin = origin;
  }
  @Override
  public final int read() throws IOException {
    int code = this.origin.read();
    if (!this.isValid(code)) {
      throw new RuntimeException("Unsuccessful HTTP code");
    }
    return code;
  }
  protected abstract boolean isValid(int code);
}
As you see, the class doesn’t know how exactly to validate the HTTP code, and he expects us to inject that logic through inheritance and through overriding the method isValid(). We’re not going to offend him with this inheritance, since he defended all other methods with final (pay attention to the modifiers of his methods). Thus, the class is ready for our offense and is perfectly guarded against it.
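The same template-method idea can be sketched without the network (all names here are hypothetical): the parent seals read() with final and leaves exactly one slot open.

```java
interface Status {
  int read();
}

// Hypothetical fake: a status that needs no network.
final class FixedStatus implements Status {
  private final int code;

  FixedStatus(int code) {
    this.code = code;
  }

  @Override
  public int read() {
    return this.code;
  }
}

abstract class Validated implements Status {
  private final Status origin;

  protected Validated(Status origin) {
    this.origin = origin;
  }

  // Sealed: a subclass cannot break this logic.
  @Override
  public final int read() {
    int code = this.origin.read();
    if (!this.isValid(code)) {
      throw new IllegalStateException("invalid status");
    }
    return code;
  }

  // The only place where custom logic may be injected.
  protected abstract boolean isValid(int code);
}

final class Under400 extends Validated {
  Under400(Status origin) {
    super(origin);
  }

  @Override
  protected boolean isValid(int code) {
    return code < 400;
  }
}
```

The subclass fills the single abstract slot and can touch nothing else.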
To summarize, your class should either be final or abstract—nothing in between.
Update (April 2017): If you also agree that implementation inheritance is evil, all your classes must be final.
Martin Fowler says:
A library is essentially a set of functions that you can call, these days usually organized into classes.
Functions organized into classes? With all due respect, this is wrong. And it is a very common misconception of a class in object-oriented programming. Classes are not organizers of functions. And objects are not data structures.
So what is a “proper” object? Which one is not a proper one? What is the difference? Even though it is a very polemic subject, it is very important. Unless we understand what an object is, how can we write object-oriented software? Well, thanks to Java, Ruby, and others, we can. But how good will it be? Unfortunately, this is not an exact science, and there are many opinions. Here is my list of qualities of a good object.
Class vs. Object

Before we start talking about objects, let’s define what a class is. It is a place where objects are being born (a.k.a. instantiated). The main responsibility of a class is to construct new objects on demand and destruct them when they are not used anymore. A class knows how its children should look and how they should behave. In other words, it knows what contracts they should obey.
Sometimes I hear classes being called “object templates” (for example, Wikipedia says so). This definition is not correct because it places classes into a passive position. This definition assumes that someone will get a template and build an object by using it. This may be true, technically speaking, but conceptually it’s wrong. Nobody else should be involved—there are only a class and its children. An object asks a class to create another object, and the class constructs it; that’s it. Ruby expresses this concept much better than Java or C++:
photo = File.new('/tmp/photo.png')The object photo is constructed by the class File (new is an entry point to the class). Once constructed, the object is acting on its own. It shouldn’t know who constructed it and how many more brothers and sisters it has in the class. Yes, I mean that reflection is a terrible idea, but I’ll write more about it in one of the next posts :) Now, let’s talk about objects and their best and worst sides.
1. He Exists in Real Life

First of all, an object is a living organism. Moreover, an object should be anthropomorphized, i.e. treated like a human being (or a pet, if you like them more). By this I basically mean that an object is not a data structure or a collection of functions. Instead, it is an independent entity with its own life cycle, its own behavior, and its own habits.
An employee, a department, an HTTP request, a table in MySQL, a line in a file, or a file itself are proper objects—because they exist in real life, even when our software is turned off. To be more precise, an object is a representative of a real-life creature. It is a proxy of that real-life creature in front of all other objects. Without such a creature, there is—obviously—no object.
photo = File.new('/tmp/photo.png')
puts photo.width()
In this example, I’m asking File to construct a new object photo, which will be a representative of a real file on disk. You may say that a file is also something virtual and exists only when the computer is turned on. I would agree and refine the definition of “real life” as follows: It is everything that exists aside from the scope of the program the object lives in. The disk file is outside the scope of our program; that’s why it is perfectly correct to create its representative inside the program.
A controller, a parser, a filter, a validator, a service locator, a singleton, or a factory are not good objects (yes, most GoF patterns are anti-patterns!). They don’t exist apart from your software, in real life. They are invented just to tie other objects together. They are artificial and fake creatures. They don’t represent anyone. Seriously, an XML parser—who does it represent? Nobody.
Some of them may become good if they change their names; others can never excuse their existence. For example, that XML parser can be renamed to “parseable XML” and start to represent an XML document that exists outside of our scope.
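Here is a minimal sketch of that rename in Java. All the names here (XML, ParseableXML, tagName) are my own invention for illustration, not from any real library; the point is that the object represents the document, and parsing is merely part of his behavior:

```java
// Not an "XmlParser" worker, but the XML document itself,
// which happens to know how to read its own structure.
interface XML {
    String tagName();
}

final class ParseableXML implements XML {
    private final String content;
    ParseableXML(String content) {
        this.content = content;
    }
    @Override
    public String tagName() {
        // Naive root-tag extraction; enough for a sketch.
        int start = this.content.indexOf('<') + 1;
        int end = start;
        while (end < this.content.length()
            && Character.isLetter(this.content.charAt(end))) {
            ++end;
        }
        return this.content.substring(start, end);
    }
}
```

The object now answers the question “what is the real-life entity behind you?”: an XML document that exists outside our scope.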
Always ask yourself, “What is the real-life entity behind my object?” If you can’t find an answer, start thinking about refactoring.
2. He Works by Contracts

A good object always works by contracts. He expects to be hired not because of his personal merits but because he obeys the contracts. On the other hand, when we hire an object, we shouldn’t discriminate and expect some specific object from a specific class to do the work for us. We should expect any object to do what our contract says. As long as the object does what we need, we should not be interested in his class of origin, his sex, or his religion.
For example, I need to show a photo on the screen. I want that photo to be read from a file in PNG format. I’m contracting an object from class DataFile and asking him to give me the binary content of that image.
But wait, do I care where exactly the content will come from—the file on disk, or an HTTP request, or maybe a document in Dropbox? Actually, I don’t. All I care about is that some object gives me a byte array with PNG content. So my contract would look like this:
interface Binary {
byte[] read();
}
Now, any object from any class (not just DataFile) can work for me. All he has to do, in order to be eligible, is to obey the contract—by implementing the interface Binary.
The rule here is simple: every public method in a good object should implement his counterpart from an interface. If your object has public methods that are not inherited from any interface, he is badly designed.
There are two practical reasons for this. First, an object working without a contract is impossible to mock in a unit test. Second, a contract-less object is impossible to extend via decoration.
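For instance, because client code depends only on the contract, a unit test can hand it a fake instead of a real file. The FakePng class below is my own illustration, not part of any library:

```java
// The contract from the text: something that has binary content.
interface Binary {
    byte[] read();
}

// A fake object obeying the same contract; no disk or network needed.
final class FakePng implements Binary {
    @Override
    public byte[] read() {
        // The first four bytes of any PNG file (part of its signature).
        return new byte[] {(byte) 0x89, 'P', 'N', 'G'};
    }
}
```

Any method that accepts a Binary can now be tested with FakePng, without touching the file system.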
3. He Is Unique
A good object should always encapsulate something in order to be unique. If there is nothing to encapsulate, an object may have identical clones, which I believe is bad. Here is an example of a bad object, which may have clones:
class HTTPStatus implements Status {
private URL page;
HTTPStatus() {
try {
this.page = new URL("http://localhost");
} catch (MalformedURLException ex) {
throw new IllegalStateException(ex);
}
}
@Override
public int read() throws IOException {
return HttpURLConnection.class.cast(
this.page.openConnection()
).getResponseCode();
}
}
I can create a few instances of class HTTPStatus, and all of them will be equal to each other:
first = new HTTPStatus();
second = new HTTPStatus();
assert first.equals(second);
Obviously, utility classes, which have only static methods, can’t instantiate good objects. More generally, utility classes don’t have any of the merits mentioned in this article and can’t even be called “classes.” They are simply terrible abusers of the object paradigm and exist in modern object-oriented languages only because their inventors enabled static methods.
4. He Is Immutable
A good object should never change his encapsulated state. Remember, an object is a representative of a real-life entity, and this entity should stay the same through the entire life of the object. In other words, an object should never betray those whom he represents. He should never change owners. :)
Be aware that immutability doesn’t mean that all methods always return the same values. Instead, a good immutable object is very dynamic. However, he never changes his internal state. For example:
@Immutable
final class HTTPStatus implements Status {
private final URL page;
public HTTPStatus(URL url) {
this.page = url;
}
@Override
public int read() throws IOException {
return HttpURLConnection.class.cast(
this.page.openConnection()
).getResponseCode();
}
}
Even though the method read() may return different values, the object is immutable. He points to a certain web page and will never point anywhere else. He will never change his encapsulated state, and he will never betray the URL he represents.
Why is immutability a virtue? This article explains in detail: Objects Should Be Immutable. In a nutshell, immutable objects are better because:
- Immutable objects are simpler to construct, test, and use.
- Truly immutable objects are always thread-safe.
- They help avoid temporal coupling.
- Their usage is side-effect free (no defensive copies).
- They always have failure atomicity.
- They are much easier to cache.
- They prevent NULL references.
Of course, a good object doesn’t have setters, which may change his state and force him to betray the URL. In other words, introducing a setURL() method would be a terrible mistake in class HTTPStatus.
Besides all that, immutable objects will force you to make more cohesive, solid, and understandable designs, as this article explains: How Immutability Helps.
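To illustrate the temporal-coupling bullet above: with setters, the correct order of calls becomes an unwritten contract between methods, while an immutable object is either complete or absent. The Request class below is a hypothetical example of mine, not from any library:

```java
// With a mutable object, the sequence of setter calls is fragile:
//   Request r = new Request();
//   r.setMethod("POST");  // forget this line and send() misbehaves
//   r.send();
// An immutable object cannot exist in a half-configured state:
final class Request {
    private final String method;
    Request(String method) {
        this.method = method;
    }
    String method() {
        return this.method;
    }
}
```

There is simply no moment in time at which the object is constructed but not yet configured.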
5. His Class Doesn’t Have Anything Static
A static method implements a behavior of a class, not an object. Let’s say we have class File, and his children have method size():
final class File implements Measurable {
@Override
public int size() {
// calculate the size of the file and return
}
}
So far, so good; the method size() is there because of the contract Measurable, and every object of class File will be able to measure his size. A terrible mistake would be to design this class with a static method instead (this design is also known as a utility class and is very popular in Java, Ruby, and almost every OOP language):
// TERRIBLE DESIGN, DON'T USE!
class File {
public static int size(String file) {
// calculate the size of the file and return
}
}
This design runs completely against the object-oriented paradigm. Why? Because static methods turn object-oriented programming into “class-oriented” programming. This method, size(), exposes the behavior of the class, not of his objects. What’s wrong with this, you may ask? Why can’t we have both objects and classes as first-class citizens in our code? Why can’t both of them have methods and properties?
The problem is that with class-oriented programming, decomposition doesn’t work anymore. We can’t break down a complex problem into parts, because only a single instance of a class exists in the entire program. The power of OOP is that it allows us to use objects as an instrument for scope decomposition. When I instantiate an object inside a method, he is dedicated to my specific task. He is perfectly isolated from all other objects around the method. This object is a local variable in the scope of the method. A class, with his static methods, is always a global variable no matter where I use him. Because of that, I can’t isolate my interaction with this variable from others.
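To make the contrast concrete, here is a tiny sketch; the class Max is my own illustration, not from the article:

```java
// An object dedicated to one task, isolated in its creator's scope.
final class Max {
    private final int left;
    private final int right;
    Max(int left, int right) {
        this.left = left;
        this.right = right;
    }
    int value() {
        return this.left > this.right ? this.left : this.right;
    }
}
// Compare with Math.max(a, b): a global procedure that belongs
// to a class, not to any object, and is reachable from anywhere.
```

The Max object is a local variable; the static method Math.max() is, in effect, a global one.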
Besides being conceptually against object-oriented principles, public static methods have a few practical drawbacks:
First, it’s impossible to mock them (Well, you can use PowerMock, but this will then be the most terrible decision you could make in a Java project… I made it once, a few years ago).
Second, they are not thread-safe by definition, because they always work with static variables, which are accessible from all threads. You can make them thread-safe, but this will always require explicit synchronization.
Every time you see a public static method, start rewriting immediately. I don’t even want to mention how terrible static (or global) variables are. I think it is just obvious.
6. His Name Is Not a Job Title

The name of an object should tell us what this object is, not what it does, just like we name objects in real life: book instead of page aggregator, cup instead of water holder, T-shirt instead of body dresser. There are exceptions, of course, like printer or computer, but they were invented just recently and by those who didn’t read this article. :)
For example, these names tell us who their owners are: an apple, a file, a series of HTTP requests, a socket, an XML document, a list of users, a regular expression, an integer, a PostgreSQL table, or Jeffrey Lebowski. A properly named object is always possible to draw as a small picture. Even a regular expression can be drawn.
By contrast, here is an example of names that tell us what their owners do: a file reader, a text parser, a URL validator, an XML printer, a service locator, a singleton, a script runner, or a Java programmer. Can you draw any of them? No, you can’t. These names are not suitable for good objects. They are terrible names that lead to terrible design.
In general, avoid names that end with “-er”—most of them are bad.
“What is the alternative to a FileReader?” I hear you asking. What would be a better name? Let’s see. We already have File, which is a representative of a real-world file on disk. This representative is not powerful enough for us, because he doesn’t know how to read the content of the file. We want to create a more powerful one that will have that ability. What would we call him? Remember, the name should say what he is, not what he does. What is he? He is a file that has data; not just a file, like File, but a more sophisticated one, with data. So how about FileWithData or simply DataFile?
The same logic should be applicable to all other names. Always think about what it is rather than what it does. Give your objects real, meaningful names instead of job titles.
More about this in Don’t Create Objects That End With -ER.
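As a sketch of that idea (my own code, not from any real library), DataFile represents a file that has data and obeys the Binary contract from section 2. Wrapping the checked IOException into an unchecked one is my assumption here, made only to keep the contract clean:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Paths;

// The contract from section 2: something that has binary content.
interface Binary {
    byte[] read();
}

// A file that has data; named for what it is, not what it does.
final class DataFile implements Binary {
    private final String path;
    DataFile(String path) {
        this.path = path;
    }
    @Override
    public byte[] read() {
        try {
            return Files.readAllBytes(Paths.get(this.path));
        } catch (IOException ex) {
            // Assumption: translate to an unchecked exception so the
            // Binary contract stays free of checked exceptions.
            throw new UncheckedIOException(ex);
        }
    }
}
```

The name answers “what is he?” (a file with data), not “what does he do?” (read files).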
7. His Class Is Either Final or Abstract

A good object comes from either a final or abstract class. A final class is one that can’t be extended via inheritance. An abstract class is one that can’t have instances. Simply put, a class should either say, “You can never break me; I’m a black box for you” or “I’m broken already; fix me first and then use.”
There is nothing in between. A final class is a black box that you can’t modify by any means. He works as he works, and you either use him or throw him away. You can’t create another class that will inherit his properties. This is not allowed because of that final modifier. The only way to extend such a final class is through decoration of his children. Let’s say I have the class HTTPStatus (see above), and I don’t like him. Well, I like him, but he’s not powerful enough for me. I want him to throw an exception if the HTTP status is over 400. I want his method, read(), to do more than it does now. A traditional way would be to extend the class and override his method:
class OnlyValidStatus extends HTTPStatus {
public OnlyValidStatus(URL url) {
super(url);
}
@Override
public int read() throws IOException {
int code = super.read();
if (code >= 400) {
throw new RuntimeException("Unsuccessful HTTP code");
}
return code;
}
}
Why is this wrong? It is very wrong because we risk breaking the logic of the entire parent class by overriding one of his methods. Remember, once we override the method read() in the child class, all methods from the parent class start to use his new version. We’re literally injecting a new “piece of implementation” right into the class. Philosophically speaking, this is an offense.
On the other hand, to extend a final class, you have to treat him like a black box and decorate him with your own implementation (a.k.a. Decorator Pattern):
final class OnlyValidStatus implements Status {
private final Status origin;
public OnlyValidStatus(Status status) {
this.origin = status;
}
@Override
public int read() throws IOException {
int code = this.origin.read();
if (code >= 400) {
throw new RuntimeException("Unsuccessful HTTP code");
}
return code;
}
}
Make sure that this class is implementing the same interface as the original one: Status. The instance of HTTPStatus will be passed into him through the constructor and encapsulated. Then every call will be intercepted and implemented in a different way, if necessary. In this design, we treat the original object as a black box and never touch his internal logic.
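The contract also makes the decorator easy to unit-test: feed him a fake Status instead of a real HTTP connection. The sketch below uses a simplified, exception-free variant of the interface, and the class FkStatus is my own invention for the test:

```java
// Simplified variant of the contract, without checked exceptions,
// just to show that a decorator is testable through its interface.
interface Status {
    int read();
}

final class OnlyValidStatus implements Status {
    private final Status origin;
    OnlyValidStatus(Status status) {
        this.origin = status;
    }
    @Override
    public int read() {
        int code = this.origin.read();
        if (code >= 400) {
            throw new RuntimeException("Unsuccessful HTTP code");
        }
        return code;
    }
}

// A fake status with a fixed code; no network connection needed.
final class FkStatus implements Status {
    private final int code;
    FkStatus(int code) {
        this.code = code;
    }
    @Override
    public int read() {
        return this.code;
    }
}
```

The decorator never learns that his origin is a fake, which is exactly the point of working by contracts.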
If you don’t use that final keyword, anyone (including yourself) will be able to extend the class and… offend him :( So a class without final is a bad design.
An abstract class is the exact opposite case—he tells us that he is incomplete and we can’t use him “as is.” We have to inject our custom implementation logic into him, but only into the places he allows us to touch. These places are explicitly marked as abstract methods. For example, our HTTPStatus may look like this:
abstract class ValidatedHTTPStatus implements Status {
private final Status origin;
protected ValidatedHTTPStatus(Status origin) {
this.origin = origin;
}
@Override
public final int read() throws IOException {
int code = this.origin.read();
if (!this.isValid(code)) {
throw new RuntimeException("Unsuccessful HTTP code");
}
return code;
}
protected abstract boolean isValid(int code);
}
As you see, the class doesn’t know how exactly to validate the HTTP code, and he expects us to inject that logic through inheritance and through overriding the method isValid(). We’re not going to offend him with this inheritance, since he defended all other methods with final (pay attention to the modifiers of his methods). Thus, the class is ready for our offense and is perfectly guarded against it.
To summarize, your class should either be final or abstract—nothing in between.
Update (April 2017): If you also agree that implementation inheritance is evil, all your classes must be final.
How Immutability Helps

In a few of my recent articles, including Getters/Setters. Evil. Period., Objects Should Be Immutable, and Dependency Injection Containers are Code Polluters, I universally labeled all mutable objects with “setters” (object methods starting with set) evil. My argumentation was based mostly on metaphors and abstract examples. Apparently, this wasn’t convincing enough for many of you—I received a few requests asking to provide more specific and practical examples.
Thus, in order to illustrate my strongly negative attitude to “mutability via setters,” I took the existing commons-email Java library from Apache and re-designed it my way, without setters and with “object thinking” in mind. I released my library as part of the jcabi family—jcabi-email. Let’s see what benefits we get from a “pure” object-oriented and immutable approach, without setters.
Here is how your code will look, if you send an email using commons-email:
Email email = new SimpleEmail();
email.setHostName("smtp.googlemail.com");
email.setSmtpPort(465);
email.setAuthenticator(new DefaultAuthenticator("user", "pwd"));
email.setFrom("yegor256@gmail.com", "Yegor Bugayenko");
email.addTo("dude@jcabi.com");
email.setSubject("how are you?");
email.setMsg("Dude, how are you?");
email.send();
Here is how you do the same with jcabi-email:
Postman postman = new Postman.Default(
new SMTP("smtp.googlemail.com", 465, "user", "pwd")
);
Envelope envelope = new Envelope.MIME(
new Array<Stamp>(
new StSender("Yegor Bugayenko <yegor256@gmail.com>"),
new StRecipient("dude@jcabi.com"),
new StSubject("how are you?")
),
new Array<Enclosure>(
new EnPlain("Dude, how are you?")
)
);
postman.send(envelope);
I think the difference is obvious.
In the first example, you’re dealing with a monster class that can do everything for you, including sending your MIME message via SMTP, creating the message, configuring its parameters, adding MIME parts to it, etc. The Email class from commons-email is really a huge class—33 private properties, over a hundred methods, about two thousand lines of code. First, you configure the class through a bunch of setters, and then you ask it to send() an email for you.
In the second example, we have seven objects instantiated via seven new calls. Postman is responsible for packaging a MIME message; SMTP is responsible for sending it via SMTP; stamps (StSender, StRecipient, and StSubject) are responsible for configuring the MIME message before delivery; enclosure EnPlain is responsible for creating a MIME part for the message we’re going to send. We construct these seven objects, encapsulating one into another, and then we ask the postman to send() the envelope for us.
What’s Wrong With a Mutable Email?
From a user perspective, there is almost nothing wrong. Email is a powerful class with multiple controls—just hit the right one and the job gets done. However, from a developer perspective, the Email class is a nightmare, mostly because the class is very big and difficult to maintain.
Because the class is so big, every time you want to extend it by introducing a new method, you’re facing the fact that you’re making the class even worse—longer, less cohesive, less readable, less maintainable, etc. You have a feeling that you’re digging into something dirty and that there is no hope to make it cleaner, ever. I’m sure you’re familiar with this feeling—most legacy applications look that way. They have huge multi-line “classes” (in reality, COBOL programs written in Java) that were inherited from a few generations of programmers before you. When you start, you’re full of energy, but after a few minutes of scrolling such a “class” you say—“screw it, it’s almost Saturday.”
Because the class is so big, there is no data hiding or encapsulation any more—33 variables are accessible by over 100 methods. What is hidden? This Email.java file is in reality a big, procedural 2000-line script, called a “class” by mistake. Nothing is hidden; once you cross the border of the class by calling one of its methods, you have full access to all the data you may need. Why is this bad? Well, why do we need encapsulation in the first place? In order to protect one programmer from another, a.k.a. defensive programming. While I’m busy changing the subject of the MIME message, I want to be sure that some other method’s activity won’t interfere with me by changing the sender and touching my subject by mistake. Encapsulation helps us narrow down the scope of the problem, while this Email class does exactly the opposite.
Because the class is so big, its unit testing is even more complicated than the class itself. Why? Because of multiple inter-dependencies between its methods and properties. In order to test setCharset() you have to prepare the entire object by calling a few other methods, then you have to call send() to make sure the message being sent actually uses the encoding you specified. Thus, in order to test a one-line method setCharset() you run the entire integration testing scenario of sending a full MIME message through SMTP. Obviously, if something gets changed in one of the methods, almost every test method will be affected. In other words, tests are very fragile, unreliable and over-complicated.
I can go on and on with this “because the class is so big,” but I think it is obvious that a small, cohesive class is always better than a big one. It is obvious to me, to you, and to any object-oriented programmer. But why is it not so obvious to the developers of Apache Commons Email? I don’t think they are stupid or uneducated. What is it then?
How and Why Did It Happen?
This is how it always happens. You start to design a class as something cohesive, solid, and small. Your intentions are very positive. Very soon you realize that there is something else that this class has to do. Then, something else. Then, even more.
The best way to make your class more and more powerful is by adding setters that inject configuration parameters into the class so that it can process them inside, isn’t it?
This is the root cause of the problem! The root cause is our ability to insert data into mutable objects via configuration methods, also known as “setters.” When an object is mutable and allows us to add setters whenever we want, we will do it without limits.
Let me put it this way—mutable classes tend to grow in size and lose cohesiveness.
If the commons-email authors had made this Email class immutable in the beginning, they wouldn’t have been able to add so many methods to it and encapsulate so many properties. They wouldn’t have been able to turn it into a monster. Why? Because an immutable object only accepts state through a constructor. Can you imagine a 33-argument constructor? Of course not.
When you make your class immutable in the first place, you are forced to keep it cohesive, small, solid, and robust, because you can’t encapsulate too much and you can’t modify what’s encapsulated. Just two or three constructor arguments, and you’re done.
How Did I Design An Immutable Email?
When I was designing jcabi-email I started with a small and simple class: Postman. Well, it is an interface, since I never make interface-less classes. So, Postman is… a post man. He is delivering messages to other people. First, I created a default version of it (I omit the ctor, for the sake of brevity):
import javax.mail.Message;
@Immutable
class Postman.Default implements Postman {
private final String host;
private final int port;
private final String user;
private final String password;
@Override
public void send(Message msg) {
// create SMTP session
// create transport
// transport.connect(this.host, this.port, etc.)
// transport.send(msg)
// transport.close();
}
}
Good start; it works. What now? Well, the Message is difficult to construct. It is a complex class from the JDK that requires some manipulations before it can become a nice HTML email. So I created an envelope, which will build this complex object for me (pay attention: both Postman and Envelope are immutable and annotated with @Immutable from jcabi-aspects):
@Immutable
interface Envelope {
Message unwrap();
}
I also refactored Postman to accept an envelope, not a message:
@Immutable
interface Postman {
void send(Envelope env);
}
So far, so good. Now let’s try to create a simple implementation of Envelope:
@Immutable
class MIME implements Envelope {
@Override
public Message unwrap() {
return new MimeMessage(
Session.getDefaultInstance(new Properties())
);
}
}
It works, but it does nothing useful yet. It only creates an absolutely empty MIME message and returns it. How about adding a subject to it, and both To: and From: addresses (pay attention: the MIME class is also immutable):
@Immutable
class Envelope.MIME implements Envelope {
private final String subject;
private final String from;
private final Array<String> to;
public MIME(String subj, String sender, Iterable<String> rcpts) {
this.subject = subj;
this.from = sender;
this.to = new Array<String>(rcpts);
}
@Override
public Message unwrap() {
Message msg = new MimeMessage(
Session.getDefaultInstance(new Properties())
);
msg.setSubject(this.subject);
msg.setFrom(new InternetAddress(this.from));
for (String email : this.to) {
msg.setRecipient(
Message.RecipientType.TO,
new InternetAddress(email)
);
}
return msg;
}
}
Looks correct, and it works. But it is still too primitive. How about CC: and BCC:? What about email text? How about PDF enclosures? What if I want to specify the encoding of the message? What about Reply-To?
Can I add all these parameters to the constructor? Remember, the class is immutable, and I can’t introduce a setReplyTo() method. I would have to pass the replyTo argument into the constructor, and that is impossible in practice, because the constructor would have too many arguments, and nobody would be able to use it.
So, what do I do?
Well, I started to think: how can we break the concept of an “envelope” into smaller concepts? And this is what I invented. Like a real-life envelope, my MIME object will have stamps. Stamps will be responsible for configuring the object Message (again, Stamp is immutable, as well as all its implementers):
@Immutable
interface Stamp {
void attach(Message message);
}
Now, I can simplify my MIME class to the following:
@Immutable
class Envelope.MIME implements Envelope {
private final Array<Stamp> stamps;
public MIME(Iterable<Stamp> stmps) {
this.stamps = new Array<Stamp>(stmps);
}
@Override
public Message unwrap() {
Message msg = new MimeMessage(
Session.getDefaultInstance(new Properties())
);
for (Stamp stamp : this.stamps) {
stamp.attach(msg);
}
return msg;
}
}
Now, I will create stamps for the subject, for To:, for From:, for CC:, for BCC:, etc. As many stamps as I like. The class MIME will stay the same—small, cohesive, readable, solid, etc.
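For instance, a stamp for the subject could look like the sketch below. To keep it self-contained, I use a minimal stand-in for javax.mail.Message; the real StSubject in jcabi-email works against the JavaMail API, so treat the names and signatures here as illustrative only:

```java
// Minimal stand-in for javax.mail.Message, just to show the pattern.
interface Message {
    void setSubject(String subject);
    String subject();
}

final class SimpleMessage implements Message {
    private String subject = "";
    @Override
    public void setSubject(String subject) {
        this.subject = subject;
    }
    @Override
    public String subject() {
        return this.subject;
    }
}

interface Stamp {
    void attach(Message message);
}

// A stamp that configures the subject, analogous to StSubject.
final class StSubject implements Stamp {
    private final String text;
    StSubject(String text) {
        this.text = text;
    }
    @Override
    public void attach(Message message) {
        message.setSubject(this.text);
    }
}
```

Each stamp encapsulates one configuration concern, so the envelope itself never grows.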
What is important here is why I made the decision to refactor while the class was relatively small. Indeed, I started to worry about these stamp classes when my MIME class was just 25 lines in size.
That is exactly the point of this article—immutability forces you to design small and cohesive objects.
Without immutability, I would have gone the same direction as commons-email. My MIME class would grow in size and sooner or later would become as big as Email from commons-email. The only thing that stopped me was the necessity to refactor it, because I wasn’t able to pass all arguments through a constructor.
Without immutability, I wouldn’t have had that motivator and I would have done what Apache developers did with commons-email—bloat the class and turn it into an unmaintainable monster.
That’s jcabi-email. I hope this example was illustrative enough and that you will start writing cleaner code with immutable objects.
@Immutable
interface Envelope {
Message unwrap();
}I also refactor the Postman to accept an envelope, not a message:
@Immutable
interface Postman {
void send(Envelope env);
}So far, so good. Now let’s try to create a simple implementation of Envelope:
@Immutable
class MIME implements Envelope {
@Override
public Message unwrap() {
return new MimeMessage(
Session.getDefaultInstance(new Properties())
);
}
}It works, but it does nothing useful yet. It only creates an absolutely empty MIME message and returns it. How about adding a subject to it and both To: and From: addresses (pay attention, MIME class is also immutable):
@Immutable
class Envelope.MIME implements Envelope {
private final String subject;
private final String from;
private final Array<String> to;
public MIME(String subj, String sender, Iterable<String> rcpts) {
this.subject = subj;
this.from = sender;
this.to = new Array<String>(rcpts);
}
@Override
public Message unwrap() {
Message msg = new MimeMessage(
Session.getDefaultInstance(new Properties())
);
msg.setSubject(this.subject);
msg.setFrom(new InternetAddress(this.from));
for (String email : this.to) {
msg.setRecipient(
Message.RecipientType.TO,
new InternetAddress(email)
);
}
return msg;
}
}Looks correct and it works. But it is still too primitive. How about CC: and BCC:? What about email text? How about PDF enclosures? What if I want to specify the encoding of the message? What about Reply-To?
Can I add all these parameters to the constructor? Remember, the class is immutable and I can’t introduce the setReplyTo() method. I have to pass the replyTo argument into its constructor. It’s impossible, because the constructor will have too many arguments, and nobody will be able to use it.
So, what do I do?
Well, I started to think: how can we break the concept of an “envelope” into smaller concepts—and this what I invented. Like a real-life envelope, my MIME object will have stamps. Stamps will be responsible for configuring an object Message (again, Stamp is immutable, as well as all its implementers):
@Immutable
interface Stamp {
void attach(Message message);
}Now, I can simplify my MIME class to the following:
@Immutable
class Envelope.MIME implements Envelope {
private final Array<Stamp> stamps;
public MIME(Iterable<Stamp> stmps) {
this.stamps = new Array<Stamp>(stmps);
}
@Override
public Message unwrap() {
Message msg = new MimeMessage(
Session.getDefaultInstance(new Properties())
);
for (Stamp stamp : this.stamps) {
stamp.attach(msg);
}
return msg;
}
}Now, I will create stamps for the subject, for To:, for From:, for CC:, for BCC:, etc. As many stamps as I like. The class MIME will stay the same—small, cohesive, readable, solid, etc.
What is important here is why I made the decision to refactor while the class was relatively small. Indeed, I started to worry about these stamp classes when my MIME class was just 25 lines in size.
That is exactly the point of this article—_immutability forces you to design small and cohesive objects_.
Without immutability, I would have gone the same direction as commons-email. My MIME class would grow in size and sooner or later would become as big as Email from commons-email. The only thing that stopped me was the necessity to refactor it, because I wasn’t able to pass all arguments through a constructor.
Without immutability, I wouldn’t have had that motivator and I would have done what Apache developers did with commons-email—bloat the class and turn it into an unmaintainable monster.
That’s jcabi-email. I hope this example was illustrative enough and that you will start writing cleaner code with immutable objects.
https://www.yegor256.com/2014/11/07/how-immutability-helps.html
How Immutability Helps
- Yegor Bugayenko
In a few recent posts, including "Getters/Setters. Evil. Period.", "Objects Should Be Immutable", and "Dependency Injection Containers are Code Polluters", I universally labeled all mutable objects with "setters" (object methods starting with set) as evil. My argumentation was based mostly on metaphors and abstract examples. Apparently, this wasn't convincing enough for many of you—I received a few requests asking me to provide more specific and practical examples.
Thus, in order to illustrate my strongly negative attitude to "mutability via setters," I took the existing commons-email Java library from Apache and re-designed it my way, without setters and with "object thinking" in mind. I released my library as part of the jcabi family—jcabi-email. Let's see what benefits we get from a "pure" object-oriented and immutable approach, without setters.
Here is how your code will look, if you send an email using commons-email:
Email email = new SimpleEmail();
email.setHostName("smtp.googlemail.com");
email.setSmtpPort(465);
email.setAuthenticator(new DefaultAuthenticator("user", "pwd"));
email.setFrom("yegor256@gmail.com", "Yegor Bugayenko");
email.addTo("dude@jcabi.com");
email.setSubject("how are you?");
email.setMsg("Dude, how are you?");
email.send();
Here is how you do the same with jcabi-email:
Postman postman = new Postman.Default(
new SMTP("smtp.googlemail.com", 465, "user", "pwd")
);
Envelope envelope = new Envelope.MIME(
new Array<Stamp>(
new StSender("Yegor Bugayenko <yegor256@gmail.com>"),
new StRecipient("dude@jcabi.com"),
new StSubject("how are you?")
),
new Array<Enclosure>(
new EnPlain("Dude, how are you?")
)
);
postman.send(envelope);
I think the difference is obvious.
In the first example, you're dealing with a monster class that can do everything for you, including sending your MIME message via SMTP, creating the message, configuring its parameters, adding MIME parts to it, etc. The Email class from commons-email is really a huge class—33 private properties, over a hundred methods, about two thousand lines of code. First, you configure the class through a bunch of setters, and then you ask it to send() an email for you.
In the second example, we have seven objects instantiated via seven new calls. Postman is responsible for packaging a MIME message; SMTP is responsible for sending it via SMTP; stamps (StSender, StRecipient, and StSubject) are responsible for configuring the MIME message before delivery; enclosure EnPlain is responsible for creating a MIME part for the message we’re going to send. We construct these seven objects, encapsulating one into another, and then we ask the postman to send() the envelope for us.
What’s Wrong With a Mutable Email?
From a user perspective, there is almost nothing wrong. Email is a powerful class with multiple controls—just hit the right one and the job gets done. However, from a developer's perspective the Email class is a nightmare, mostly because the class is very big and difficult to maintain.
Because the class is so big, every time you want to extend it by introducing a new method, you're facing the fact that you're making the class even worse—longer, less cohesive, less readable, less maintainable, etc. You have a feeling that you're digging into something dirty and that there is no hope of ever making it cleaner. I'm sure you're familiar with this feeling—most legacy applications look that way. They have huge, multi-thousand-line "classes" (in reality, COBOL programs written in Java) that were inherited from a few generations of programmers before you. When you start, you're full of energy, but after a few minutes of scrolling through such a "class" you say—"screw it, it's almost Saturday."
Because the class is so big, there is no data hiding or encapsulation any more—33 variables are accessible by over 100 methods. What is hidden? This Email.java file is in reality a big, procedural 2000-line script, called a "class" by mistake. Nothing is hidden once you cross the border of the class by calling one of its methods; after that, you have full access to all the data you may need. Why is this bad? Well, why do we need encapsulation in the first place? To protect one programmer from another—aka defensive programming. While I'm busy changing the subject of the MIME message, I want to be sure that some other method isn't interfering with my work, changing the sender and touching my subject by mistake. Encapsulation helps us narrow down the scope of the problem, while this Email class does exactly the opposite.
Because the class is so big, its unit testing is even more complicated than the class itself. Why? Because of multiple inter-dependencies between its methods and properties. In order to test setCharset() you have to prepare the entire object by calling a few other methods, then you have to call send() to make sure the message being sent actually uses the encoding you specified. Thus, in order to test a one-line method setCharset() you run the entire integration testing scenario of sending a full MIME message through SMTP. Obviously, if something gets changed in one of the methods, almost every test method will be affected. In other words, tests are very fragile, unreliable and over-complicated.
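To see that fragility in miniature, here is a sketch with class names of my own invention (not the real commons-email API): a tiny mutable Mailer where verifying what a single setter does still forces you to configure the whole object and run the full send() scenario.

```java
// Hypothetical miniature of a mutable, setter-driven class: to check
// one setter, the test must configure everything and run send().
public class MutableTestDemo {
    static class Mailer {
        private String host;
        private String from;
        private String to;
        private String charset;
        void setHost(String h) { this.host = h; }
        void setFrom(String f) { this.from = f; }
        void setTo(String t) { this.to = t; }
        void setCharset(String c) { this.charset = c; }
        // send() refuses to work unless the whole object is configured
        String send() {
            if (host == null || from == null || to == null) {
                throw new IllegalStateException("not fully configured");
            }
            return "sent via " + host + " using " + charset;
        }
    }
    public static void main(String[] args) {
        Mailer mailer = new Mailer();
        mailer.setCharset("UTF-8");
        // To verify the one-line setCharset() we still must prepare
        // everything else and run the entire send() scenario:
        mailer.setHost("smtp.example.com");
        mailer.setFrom("a@example.com");
        mailer.setTo("b@example.com");
        assert mailer.send().contains("UTF-8");
        System.out.println("charset verified only via full send()");
    }
}
```

Scale this up from four properties to 33 and from one guard to a real SMTP session, and you get the test suite described above.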
I can go on and on with this "because the class is so big," but I think it is obvious that a small, cohesive class is always better than a big one. It is obvious to me, to you, and to any object-oriented programmer. But why is it not so obvious to the developers of Apache Commons Email? I don't think they are stupid or uneducated. What is it then?
How and Why Did It Happen?
This is how it always happens. You start to design a class as something cohesive, solid, and small. Your intentions are very positive. Very soon you realize that there is something else that this class has to do. Then, something else. Then, even more.
The best way to make your class more and more powerful is by adding setters that inject configuration parameters into the class so that it can process them inside, isn’t it?
This is the root cause of the problem! The root cause is our ability to insert data into mutable objects via configuration methods, also known as “setters.” When an object is mutable and allows us to add setters whenever we want, we will do it without limits.
Let me put it this way—_mutable classes tend to grow in size and lose cohesiveness_.
If the commons-email authors had made this Email class immutable in the beginning, they wouldn't have been able to add so many methods to it or encapsulate so many properties. They wouldn't have been able to turn it into a monster. Why? Because an immutable object accepts state only through its constructor. Can you imagine a 33-argument constructor? Of course not.
When you make your class immutable in the first place, you are forced to keep it cohesive, small, solid, and robust, because you can't encapsulate too much and you can't modify what's encapsulated. Just two or three constructor arguments and you're done.
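Here is a minimal sketch of that constructor-only discipline (an illustrative Recipient class of my own, not from jcabi): all fields are final, everything arrives at construction time, and a "change" can only mean building a new object.

```java
// Sketch of an immutable class: state arrives only through the
// constructor. Names are illustrative, not from any real library.
public final class Recipient {
    private final String name;
    private final String email;
    public Recipient(String name, String email) {
        this.name = name;
        this.email = email;
    }
    // No setters; "modification" means constructing a new object.
    public Recipient withName(String newName) {
        return new Recipient(newName, this.email);
    }
    public String address() {
        return this.name + " <" + this.email + ">";
    }
    public static void main(String[] args) {
        Recipient r = new Recipient("Dude", "dude@jcabi.com");
        Recipient renamed = r.withName("Mr. Dude");
        System.out.println(r.address());        // the original is untouched
        System.out.println(renamed.address());  // a new object instead
    }
}
```

The moment a third or fourth responsibility knocks on the door of such a class, the constructor makes the bloat visible immediately, which is exactly the pressure this article is about.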
How Did I Design An Immutable Email?
When I was designing jcabi-email I started with a small and simple class: Postman. Well, it is an interface, since I never make interface-less classes. So, Postman is… a post man. He is delivering messages to other people. First, I created a default version of it (I omit the ctor, for the sake of brevity):
import javax.mail.Message;
@Immutable
class Postman.Default implements Postman {
private final String host;
private final int port;
private final String user;
private final String password;
@Override
void send(Message msg) {
// create SMTP session
// create transport
// transport.connect(this.host, this.port, etc.)
// transport.send(msg)
// transport.close();
}
}
Good start, it works. What now? Well, the Message is difficult to construct. It is a complex class from the JDK that requires some manipulation before it can become a nice HTML email. So I created an envelope, which will build this complex object for me (pay attention, both Postman and Envelope are immutable and annotated with @Immutable from jcabi-aspects):
@Immutable
interface Envelope {
Message unwrap();
}
I also refactored Postman to accept an envelope, not a message:
@Immutable
interface Postman {
void send(Envelope env);
}
So far, so good. Now let's try to create a simple implementation of Envelope:
@Immutable
class MIME implements Envelope {
@Override
public Message unwrap() {
return new MimeMessage(
Session.getDefaultInstance(new Properties())
);
}
}
It works, but it does nothing useful yet. It only creates an absolutely empty MIME message and returns it. How about adding a subject to it, and both To: and From: addresses (pay attention, the MIME class is also immutable):
@Immutable
class Envelope.MIME implements Envelope {
private final String subject;
private final String from;
private final Array<String> to;
public MIME(String subj, String sender, Iterable<String> rcpts) {
this.subject = subj;
this.from = sender;
this.to = new Array<String>(rcpts);
}
@Override
public Message unwrap() {
Message msg = new MimeMessage(
Session.getDefaultInstance(new Properties())
);
msg.setSubject(this.subject);
msg.setFrom(new InternetAddress(this.from));
for (String email : this.to) {
msg.setRecipient(
Message.RecipientType.TO,
new InternetAddress(email)
);
}
return msg;
}
}
Looks correct, and it works. But it is still too primitive. How about CC: and BCC:? What about email text? How about PDF enclosures? What if I want to specify the encoding of the message? What about Reply-To?
Can I add all these parameters to the constructor? Remember, the class is immutable and I can’t introduce the setReplyTo() method. I have to pass the replyTo argument into its constructor. It’s impossible, because the constructor will have too many arguments, and nobody will be able to use it.
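Just to show why the constructor route collapses, here is a hypothetical dead end (these names are mine, not real jcabi-email code): pushing every new parameter through the constructor of an immutable class quickly produces an unusable signature.

```java
// Hypothetical illustration: an immutable class whose constructor
// absorbs every new feature soon becomes impossible to call.
public final class BloatedMime {
    private final String subject;
    private final String from;
    private final String replyTo;
    private final String charset;
    private final String textBody;
    // ...and we have not even reached CC:, BCC:, or attachments yet.
    public BloatedMime(String subject, String from, String replyTo,
        String charset, String textBody) {
        this.subject = subject;
        this.from = from;
        this.replyTo = replyTo;
        this.charset = charset;
        this.textBody = textBody;
    }
    public String describe() {
        return subject + "/" + from + "/" + replyTo
            + "/" + charset + "/" + textBody;
    }
}
```

Every feature adds one more positional argument, and call sites degenerate into an unreadable row of strings—which is the pressure that forces the refactoring described next.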
So, what do I do?
Well, I started to think: how can we break the concept of an "envelope" into smaller concepts—and this is what I invented. Like a real-life envelope, my MIME object will have stamps. Stamps will be responsible for configuring the Message object (again, Stamp is immutable, as are all its implementers):
@Immutable
interface Stamp {
void attach(Message message);
}
Now, I can simplify my MIME class to the following:
@Immutable
class Envelope.MIME implements Envelope {
private final Array<Stamp> stamps;
public MIME(Iterable<Stamp> stmps) {
this.stamps = new Array<Stamp>(stmps);
}
@Override
public Message unwrap() {
Message msg = new MimeMessage(
Session.getDefaultInstance(new Properties())
);
for (Stamp stamp : this.stamps) {
stamp.attach(msg);
}
return msg;
}
}
Now, I will create stamps for the subject, for To:, for From:, for CC:, for BCC:, etc. As many stamps as I like. The class MIME will stay the same—small, cohesive, readable, solid, etc.
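To make the idea concrete, here is a sketch of what one such stamp could look like. The real jcabi-email stamps work against javax.mail.Message; to keep this example self-contained I use simplified Message and Stamp stand-ins of my own, so treat the details as illustrative only.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-ins so the sketch is self-contained; the real
// jcabi-email stamps operate on javax.mail.Message.
interface Message {
    void setHeader(String name, String value);
    String header(String name);
}

interface Stamp {
    void attach(Message message);
}

// Hypothetical subject stamp: immutable, does exactly one thing.
final class StSubject implements Stamp {
    private final String subject;
    StSubject(String subject) {
        this.subject = subject;
    }
    @Override
    public void attach(Message message) {
        message.setHeader("Subject", this.subject);
    }
}

public class StampDemo {
    // A trivial in-memory Message, just for the demonstration.
    static final class SimpleMessage implements Message {
        private final Map<String, String> headers = new HashMap<>();
        @Override public void setHeader(String name, String value) {
            this.headers.put(name, value);
        }
        @Override public String header(String name) {
            return this.headers.get(name);
        }
    }
    public static void main(String[] args) {
        Message msg = new SimpleMessage();
        new StSubject("how are you?").attach(msg);
        System.out.println(msg.header("Subject")); // prints "how are you?"
    }
}
```

Notice that such a stamp can be unit-tested in complete isolation, with no SMTP server involved—precisely the testability that the monolithic Email class cannot offer.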
What is important here is when I made the decision to refactor: while the class was still relatively small. Indeed, I started to worry about these stamp classes when my MIME class was just 25 lines in size.
That is exactly the point of this article—_immutability forces you to design small and cohesive objects_.
Without immutability, I would have gone in the same direction as commons-email. My MIME class would have grown in size and sooner or later would have become as big as Email from commons-email. The only thing that stopped me was the necessity to refactor, because I wasn't able to pass all the arguments through a constructor.
Without immutability, I wouldn’t have had that motivator and I would have done what Apache developers did with commons-email—bloat the class and turn it into an unmaintainable monster.
That’s jcabi-email. I hope this example was illustrative enough and that you will start writing cleaner code with immutable objects.
I’m not going to discuss obvious arguments against “setter injections” (like in Spring IoC) and “field injections” (like in PicoContainer). These mechanisms simply violate basic principles of object-oriented programming and encourage us to create incomplete, mutable objects, that get stuffed with data during the course of application execution. Remember: ideal objects must be immutable and may not contain setters.
Instead, let’s talk about “constructor injection” (like in Google Guice) and its use with dependency injection containers. I’ll try to show why I consider these containers a redundancy, at least.
What is Dependency Injection?
This is what dependency injection is (not really different from plain old object composition):
public class Budget {
private final DB db;
public Budget(DB data) {
this.db = data;
}
public long total() {
return this.db.cell(
"SELECT SUM(cost) FROM ledger"
);
}
}
The object data is called a "dependency."
A Budget doesn’t know what kind of database it is working with. All it needs from the database is its ability to fetch a cell, using an arbitrary SQL query, via method cell(). We can instantiate a Budget with a PostgreSQL implementation of the DB interface, for example:
public class App {
public static void main(String... args) {
Budget budget = new Budget(
new Postgres("jdbc:postgresql:5740/main")
);
System.out.println("Total is: " + budget.total());
}
}
In other words, we're "injecting" a dependency into a new object budget.
An alternative to this “dependency injection” approach would be to let Budget decide what database it wants to work with:
public class Budget {
private final DB db =
new Postgres("jdbc:postgresql:5740/main");
// class methods
}
This is very dirty and leads to 1) code duplication, 2) inability to reuse, and 3) inability to test, etc. No need to discuss why. It's obvious.
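The "inability to test" point deserves one concrete illustration. With the dependency passed through the constructor, a unit test can hand Budget a fake DB instead of a real PostgreSQL connection. The DB and Budget classes below mirror the snippets above; FakeDb is a hypothetical helper of mine.

```java
// A fake DB injected through the constructor: no real database needed.
// DB and Budget mirror the article's snippets; FakeDb is illustrative.
interface DB {
    long cell(String sql);
}

final class Budget {
    private final DB db;
    Budget(DB data) {
        this.db = data;
    }
    long total() {
        return this.db.cell("SELECT SUM(cost) FROM ledger");
    }
}

public class BudgetTestDemo {
    // The fake simply returns a canned value for any query.
    static final class FakeDb implements DB {
        private final long canned;
        FakeDb(long canned) {
            this.canned = canned;
        }
        @Override public long cell(String sql) {
            return this.canned;
        }
    }
    public static void main(String[] args) {
        Budget budget = new Budget(new FakeDb(42L));
        System.out.println("Total is: " + budget.total()); // prints "Total is: 42"
    }
}
```

With the hard-coded Postgres variant, no such substitution is possible; the constructor-injected version makes it a two-line test.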
Thus, dependency injection via a constructor is an amazing technique. Well, not even a technique, really. More like a feature of Java and all other object-oriented languages. It’s expected that almost any object will want to encapsulate some knowledge (aka, a “state”). That’s what constructors are for.
What is a DI Container?
So far so good, but here comes the dark side—a dependency injection container. Here is how it works (let’s use Google Guice as an example):
import javax.inject.Inject;
public class Budget {
private final DB db;
@Inject
public Budget(DB data) {
this.db = data;
}
// same methods as above
}
Pay attention: the constructor is annotated with @Inject.
Then, we’re supposed to configure a container somewhere, when the application starts:
Injector injector = Guice.createInjector(
new AbstractModule() {
@Override
public void configure() {
this.bind(DB.class).toInstance(
new Postgres("jdbc:postgresql:5740/main")
);
}
}
);Some frameworks even allow us to configure the injector in an XML file.
From now on, we are not allowed to instantiate Budget through the new operator, like we did before. Instead, we should use the injector we just created:
public class App {
public static void main(String... args) {
Injection injector = // as we just did in the previous snippet
Budget budget = injector.getInstance(Budget.class);
System.out.println("Total is: " + budget.total());
}
}The injection automatically finds out that in order to instantiate a Budget it has to provide an argument for its constructor. It will use an instance of class Postgres, which we instantiated in the injector.
This is the right and recommended way to use Guice. There are a few even darker patterns, though, which are possible but not recommended. For example, you can make your injector a singleton and use it right inside the Budget class. These mechanisms are considered wrong even by DI container makers, however, so let’s ignore them and focus on the recommended scenario.
What Is This For?
Let me reiterate and summarize the scenarios of incorrect usage of dependency injection containers:
Field injection
Setter injection
Passing injector as a dependency
Making injector a global singleton
If we put all of them aside, all we have left is the constructor injection explained above. And how does that help us? Why do we need it? Why can’t we use plain old new in the main class of the application?
The container we created simply adds more lines to the code base, or even more files, if we use XML. And it doesn’t add anything, except an additional complexity. We should always remember this if we have the question: “What database is used as an argument of a Budget?”
The Right Way
Now, let me show you a real life example of using new to construct an application. This is how we create a “thinking engine” in rultor.com (full class is in Agents.java):
Impressive? This is a true object composition. I believe this is how a proper object-oriented application should be instantiated.
And DI containers? In my opinion, they just add unnecessary noise.
" /> dependency injection (aka, “DI”) is a natural technique of composing objects in OOP (known long before the term was introduced by Martin Fowler), Spring IoC, Google Guice, Java EE6 CDI, Dagger and other DI frameworks turn it into an anti-pattern.I’m not going to discuss obvious arguments against “setter injections” (like in Spring IoC) and “field injections” (like in PicoContainer). These mechanisms simply violate basic principles of object-oriented programming and encourage us to create incomplete, mutable objects, that get stuffed with data during the course of application execution. Remember: ideal objects must be immutable and may not contain setters.
Instead, let’s talk about “constructor injection” (like in Google Guice) and its use with dependency injection containers. I’ll try to show why I consider these containers a redundancy, at least.
What is Dependency Injection?
This is what dependency injection is (not really different from a plain old object composition):
public class Budget {
private final DB db;
public Budget(DB data) {
this.db = data;
}
public long total() {
return this.db.cell(
"SELECT SUM(cost) FROM ledger"
);
}
}The object data is called a “dependency.”
A Budget doesn’t know what kind of database it is working with. All it needs from the database is its ability to fetch a cell, using an arbitrary SQL query, via method cell(). We can instantiate a Budget with a PostgreSQL implementation of the DB interface, for example:
public class App {
public static void main(String... args) {
Budget budget = new Budget(
new Postgres("jdbc:postgresql:5740/main")
);
System.out.println("Total is: " + budget.total());
}
}In other words, we’re “injecting” a dependency into a new object budget.
An alternative to this “dependency injection” approach would be to let Budget decide what database it wants to work with:
public class Budget {
private final DB db =
new Postgres("jdbc:postgresql:5740/main");
// class methods
}This is very dirty and leads to 1) code duplication, 2) inability to reuse, and 3) inability to test, etc. No need to discuss why. It’s obvious.
Thus, dependency injection via a constructor is an amazing technique. Well, not even a technique, really. More like a feature of Java and all other object-oriented languages. It’s expected that almost any object will want to encapsulate some knowledge (aka, a “state”). That’s what constructors are for.
What is a DI Container?
So far so good, but here comes the dark side—a dependency injection container. Here is how it works (let’s use Google Guice as an example):
import javax.inject.Inject;
public class Budget {
private final DB db;
@Inject
public Budget(DB data) {
this.db = data;
}
// same methods as above
}Pay attention: the constructor is annotated with @Inject.
Then, we’re supposed to configure a container somewhere, when the application starts:
Injector injector = Guice.createInjector(
new AbstractModule() {
@Override
public void configure() {
this.bind(DB.class).toInstance(
new Postgres("jdbc:postgresql:5740/main")
);
}
}
);Some frameworks even allow us to configure the injector in an XML file.
From now on, we are not allowed to instantiate Budget through the new operator, like we did before. Instead, we should use the injector we just created:
public class App {
public static void main(String... args) {
Injection injector = // as we just did in the previous snippet
Budget budget = injector.getInstance(Budget.class);
System.out.println("Total is: " + budget.total());
}
}The injection automatically finds out that in order to instantiate a Budget it has to provide an argument for its constructor. It will use an instance of class Postgres, which we instantiated in the injector.
This is the right and recommended way to use Guice. There are a few even darker patterns, though, which are possible but not recommended. For example, you can make your injector a singleton and use it right inside the Budget class. These mechanisms are considered wrong even by DI container makers, however, so let’s ignore them and focus on the recommended scenario.
What Is This For?
Let me reiterate and summarize the scenarios of incorrect usage of dependency injection containers:
Field injection
Setter injection
Passing injector as a dependency
Making injector a global singleton
If we put all of them aside, all we have left is the constructor injection explained above. And how does that help us? Why do we need it? Why can’t we use plain old new in the main class of the application?
The container we created simply adds more lines to the code base, or even more files, if we use XML. And it doesn’t add anything, except an additional complexity. We should always remember this if we have the question: “What database is used as an argument of a Budget?”
The Right Way
Now, let me show you a real life example of using new to construct an application. This is how we create a “thinking engine” in rultor.com (full class is in Agents.java):
Impressive? This is a true object composition. I believe this is how a proper object-oriented application should be instantiated.
And DI containers? In my opinion, they just add unnecessary noise.
"/>
https://www.yegor256.com/2014/10/03/di-containers-are-evil.html
Dependency Injection Containers are Code Polluters
- Yegor Bugayenko
While dependency injection (aka, “DI”) is a natural technique of composing objects in OOP (known long before the term was introduced by Martin Fowler), Spring IoC, Google Guice, Java EE6 CDI, Dagger and other DI frameworks turn it into an anti-pattern.
I’m not going to discuss obvious arguments against “setter injection” (like in Spring IoC) and “field injection” (like in PicoContainer). These mechanisms simply violate basic principles of object-oriented programming and encourage us to create incomplete, mutable objects that get stuffed with data during the course of application execution. Remember: ideal objects must be immutable and may not contain setters.
Instead, let’s talk about “constructor injection” (like in Google Guice) and its use with dependency injection containers. I’ll try to show why I consider these containers a redundancy, at least.
What is Dependency Injection?
This is what dependency injection is (not really different from plain old object composition):
public class Budget {
  private final DB db;
  public Budget(DB data) {
    this.db = data;
  }
  public long total() {
    return this.db.cell(
      "SELECT SUM(cost) FROM ledger"
    );
  }
}

The object data is called a “dependency.”
A Budget doesn’t know what kind of database it is working with. All it needs from the database is its ability to fetch a cell, using an arbitrary SQL query, via method cell(). We can instantiate a Budget with a PostgreSQL implementation of the DB interface, for example:
public class App {
  public static void main(String... args) {
    Budget budget = new Budget(
      new Postgres("jdbc:postgresql:5740/main")
    );
    System.out.println("Total is: " + budget.total());
  }
}

In other words, we’re “injecting” a dependency into a new object budget.
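For these snippets to compile, a DB interface along the following lines is assumed. The article never shows it, so this is a reconstruction from how cell() is used above:

```java
// Reconstructed from usage: Budget.total() passes a SQL string and
// expects a numeric result back, so cell() plausibly looks like this.
interface DB {
  // Runs the query and returns the value of the single cell it yields.
  long cell(String query);
}
```

Any class that can answer a one-cell SQL query (Postgres in the example above) satisfies this contract.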
An alternative to this “dependency injection” approach would be to let Budget decide what database it wants to work with:
public class Budget {
  private final DB db =
    new Postgres("jdbc:postgresql:5740/main");
  // class methods
}

This is very dirty and leads to 1) code duplication, 2) inability to reuse, and 3) inability to test. No need to discuss why. It’s obvious.
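The testability point deserves one concrete illustration. With constructor injection, a unit test can hand Budget a fake DB and never touch a real database; with the hard-coded new Postgres(...) above, that is impossible. A minimal self-contained sketch (FakeDb is an invented name, not from the article):

```java
// Same DB contract as assumed by the snippets above.
interface DB {
  long cell(String query);
}

// Invented for this sketch: a fake that answers every query with a constant.
final class FakeDb implements DB {
  @Override
  public long cell(String query) {
    return 250L; // pretend the ledger sums to 250 cents
  }
}

// Same shape as the Budget above, trimmed to what the test needs.
final class Budget {
  private final DB db;
  Budget(DB data) {
    this.db = data;
  }
  long total() {
    return this.db.cell("SELECT SUM(cost) FROM ledger");
  }
}
```

A test then simply asserts that new Budget(new FakeDb()).total() equals 250.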
Thus, dependency injection via a constructor is an amazing technique. Well, not even a technique, really. More like a feature of Java and all other object-oriented languages. It’s expected that almost any object will want to encapsulate some knowledge (aka, a “state”). That’s what constructors are for.
What is a DI Container?
So far so good, but here comes the dark side—a dependency injection container. Here is how it works (let’s use Google Guice as an example):
import javax.inject.Inject;

public class Budget {
  private final DB db;
  @Inject
  public Budget(DB data) {
    this.db = data;
  }
  // same methods as above
}

Pay attention: the constructor is annotated with @Inject.
Then, we’re supposed to configure a container somewhere, when the application starts:
Injector injector = Guice.createInjector(
  new AbstractModule() {
    @Override
    protected void configure() {
      this.bind(DB.class).toInstance(
        new Postgres("jdbc:postgresql:5740/main")
      );
    }
  }
);

Some frameworks even allow us to configure the injector in an XML file.
From now on, we are not allowed to instantiate Budget through the new operator, like we did before. Instead, we should use the injector we just created:
public class App {
  public static void main(String... args) {
    Injector injector = ...; // as we just did in the previous snippet
    Budget budget = injector.getInstance(Budget.class);
    System.out.println("Total is: " + budget.total());
  }
}

The injector automatically finds out that, in order to instantiate a Budget, it has to provide an argument for its constructor. It will use the instance of class Postgres that we bound in the injector.
This is the right and recommended way to use Guice. There are a few even darker patterns, though, which are possible but not recommended. For example, you can make your injector a singleton and use it right inside the Budget class. These mechanisms are considered wrong even by DI container makers, however, so let’s ignore them and focus on the recommended scenario.
What Is This For?
Let me reiterate and summarize the scenarios of incorrect usage of dependency injection containers:
- Field injection
- Setter injection
- Passing the injector as a dependency
- Making the injector a global singleton
If we put all of them aside, all we have left is the constructor injection explained above. And how does that help us? Why do we need it? Why can’t we use plain old new in the main class of the application?
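To make the question concrete, here is what “plain old new in the main class” can look like, even for a deeper composition with a decorator in the middle. Every class name below is invented for illustration; only the DB/Budget shape comes from the snippets above:

```java
// Same DB contract as in the snippets above.
interface DB {
  long cell(String query);
}

// Decorator that caches the first answer it gets (invented for this sketch).
final class Cached implements DB {
  private final DB origin;
  private Long cached;
  Cached(DB origin) {
    this.origin = origin;
  }
  @Override
  public long cell(String query) {
    if (this.cached == null) {
      this.cached = this.origin.cell(query);
    }
    return this.cached;
  }
}

// A stand-in database that also counts how often it is queried.
final class FixedDb implements DB {
  private final long value;
  private int calls;
  FixedDb(long value) {
    this.value = value;
  }
  @Override
  public long cell(String query) {
    this.calls++;
    return this.value;
  }
  int calls() {
    return this.calls;
  }
}

final class Budget {
  private final DB db;
  Budget(DB db) {
    this.db = db;
  }
  long total() {
    return this.db.cell("SELECT SUM(cost) FROM ledger");
  }
}

final class App {
  public static void main(String... args) {
    // The whole graph is assembled in one place, with plain constructors;
    // "what database does Budget use?" is answered by reading this line.
    Budget budget = new Budget(new Cached(new FixedDb(100L)));
    System.out.println("Total is: " + budget.total());
  }
}
```

The entire wiring is one nested expression, visible at the point of construction, with no container in sight.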
The container we created simply adds more lines to the code base, or even more files if we use XML. And it doesn’t add anything except additional complexity. Worse, whenever the question “What database is used as an argument of a Budget?” comes up, the answer is now buried in the container configuration instead of standing right at the point of construction.
The Right Way
Now, let me point you at a real-life example of using new to construct an application: the “thinking engine” of rultor.com, a single deeply nested constructor composition (the full class is in Agents.java).
Impressive? This is true object composition. I believe this is how a proper object-oriented application should be instantiated.
And DI containers? In my opinion, they just add unnecessary noise.
https://www.yegor256.com/2014/09/16/getters-and-setters-are-evil.html
Getters/Setters. Evil. Period.
- Yegor Bugayenko
- Translated:
- Japanese
- Russian
- Polish
There is an old debate, started in 2003 by Allen Holub in his famous article Why getter and setter methods are evil, about whether getters/setters are an anti-pattern to be avoided, or something we inevitably need in object-oriented programming. I’ll try to add my two cents to this discussion.
The gist of the following text is this: getters and setters are a terrible practice, and those who use them can’t be excused. Again, to avoid any misunderstanding, I’m not saying that get/set should be avoided when possible. No. I’m saying that you should never have them anywhere near your code.

Arrogant enough to catch your attention? You’ve been using that get/set pattern for 15 years and you’re a respected Java architect? And you don’t want to hear that nonsense from a stranger? Well, I understand your feelings. I felt almost the same when I stumbled upon Object Thinking by David West, the best book about object-oriented programming I’ve read so far. So please. Calm down and try to understand while I try to explain.
Existing Arguments

There are a few arguments against “accessors” (another name for getters and setters) in an object-oriented world. None of them, I think, is strong enough. Let’s briefly go through them.
Tell, Don’t Ask Allen Holub says, “Don’t ask for the information you need to do the work; ask the object that has the information to do the work for you.”
Violated Encapsulation Principle An object can be torn apart by other objects, since they are able to inject any new data into it through setters. The object simply can’t encapsulate its own state safely enough, since anyone can alter it.
Exposed Implementation Details If we can get an object out of another object, we are relying too much on the first object’s implementation details. If tomorrow it changes, say, the type of that result, we will have to change our code as well.
All these justifications are reasonable, but they are missing the main point.
Fundamental Disbelief
Most programmers believe that an object is a data structure with methods. I’m quoting Getters and Setters Are Not Evil, an article by Bozhidar Bozhanov:
But the majority of objects for which people generate getters and setters are simple data holders.
This misconception is the consequence of a huge misunderstanding! Objects are not “simple data holders.” Objects are not data structures with attached methods. This “data holder” concept came to object-oriented programming from procedural languages, especially C and COBOL. I’ll say it again: an object is not a set of data elements and functions that manipulate them. An object is not a data entity.
What is it then?
A Ball and A Dog
In true object-oriented programming, objects are living creatures, like you and me. They are living organisms, with their own behavior, properties and a life cycle.
Can a living organism have a setter? Can you “set” a ball to a dog? Not really. But that is exactly what the following piece of software is doing:
Dog dog = new Dog();
dog.setBall(new Ball());
How does that sound?
Can you get a ball from a dog? Well, you probably can, if she ate it and you’re doing surgery. In that case, yes, we can “get” a ball from a dog. This is what I’m talking about:
Dog dog = new Dog();
Ball ball = dog.getBall();
Or an even more ridiculous example:
Dog dog = new Dog();
dog.setWeight("23kg");
Can you imagine this transaction in the real world? :)
Does it look similar to what you’re writing every day? If yes, then you’re a procedural programmer. Admit it. And this is what David West has to say about it, on page 30 of his book:
Step one in the transformation of a successful procedural developer into a successful object developer is a lobotomy.
Do you need a lobotomy? Well, I definitely needed one and received it, while reading West’s Object Thinking.
Object Thinking
Start thinking like an object and you will immediately rename those methods. This is what you will probably get:
Dog dog = new Dog();
dog.take(new Ball());
Ball ball = dog.give();
Now we’re treating the dog as a real animal, who can take a ball from us and give it back when we ask. Worth mentioning is that the dog can’t give NULL back. Dogs simply don’t know what NULL is :) Object thinking immediately eliminates NULL references from your code.
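The article doesn’t show the inside of such a Dog; here is one possible sketch. Only the take/give names come from the text above; everything else, including throwing an exception instead of returning NULL when the dog holds nothing, is my own assumption:

```java
// A hypothetical sketch of the Dog from the example above.
class Ball {
}

class Dog {
    private Ball ball; // what the dog currently holds, if anything

    void take(Ball b) {
        this.ball = b;
    }

    // The dog never gives NULL back: if she holds nothing, she refuses.
    Ball give() {
        if (this.ball == null) {
            throw new IllegalStateException("the dog has nothing to give");
        }
        Ball b = this.ball;
        this.ball = null;
        return b;
    }
}
```

The point of the sketch is that the refusal lives inside the dog: callers never have to check for NULL, because the object itself guarantees it will not hand one out.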
Besides that, object thinking will lead to object immutability, like in the “weight of the dog” example. You would re-write that like this instead:
Dog dog = new Dog("23kg");
int weight = dog.weight();
The dog is an immutable living organism, which doesn’t allow anyone from the outside to change her weight, size, or name. She can tell us, on request, her weight or name. There is nothing wrong with public methods that represent requests for certain “insides” of an object. But these methods are not “getters,” and they should never have the “get” prefix. We’re not “getting” anything from the dog. We’re not getting her name. We’re asking her to tell us her name. See the difference?
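One way the immutable variant above could look inside (the constructor signature and weight() name come from the example; the trivial "23kg" parsing is my own assumption):

```java
// A hypothetical sketch of the immutable dog from the example above.
final class Dog {
    private final int weight; // in kilograms, fixed at birth

    Dog(String weight) {
        // "23kg" -> 23; a real class would validate the format properly
        this.weight = Integer.parseInt(weight.replace("kg", ""));
    }

    // Not a getter: we ask the dog to tell us her weight.
    int weight() {
        return this.weight;
    }
}
```

The final field and the absence of any setter are what make the dog immutable: once constructed, her weight can only be asked about, never changed.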
We’re not talking semantics here, either. We are differentiating the procedural programming mindset from an object-oriented one. In procedural programming, we’re working with data, manipulating them, getting, setting, and deleting when necessary. We’re in charge, and the data is just a passive component. The dog is nothing to us—it’s just a “data holder.” It doesn’t have its own life. We are free to get whatever is necessary from it and set any data into it. This is how C, COBOL, Pascal and many other procedural languages work(ed).
On the contrary, in a true object-oriented world, we treat objects like living organisms, with their own date of birth and a moment of death—with their own identity and habits, if you wish. We can ask a dog to give us some piece of data (for example, her weight), and she may return us that information. But we always remember that the dog is an active component. She decides what will happen after our request.
That’s why it is conceptually incorrect to have any methods starting with set or get in an object. And it’s not about breaking encapsulation, as many people argue. It’s about whether you’re thinking like an object or still writing COBOL in Java syntax.
PS. Yes, you may ask: what about JavaBeans, JPA, JAXB, and many other Java APIs that rely on the get/set notation? What about Ruby’s built-in feature that simplifies the creation of accessors? Well, all of that is our misfortune. It is much easier to stay in the primitive world of procedural COBOL than to truly understand and appreciate the beautiful world of true objects.
PPS. I forgot to say: yes, dependency injection via setters is also a terrible anti-pattern. More about it in one of the next posts :)
PPPS. Here is what I’m suggesting to use instead of getters: printers.
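The post doesn’t explain what a “printer” looks like; the following is only my own interpretation of the idea: instead of exposing raw data through getters, the object prints itself into a medium it is given, so the object stays in charge of how its insides are presented. All names here are hypothetical:

```java
// A hypothetical sketch of a "printer" as an alternative to getters.
final class Dog {
    private final String name;

    Dog(String name) {
        this.name = name;
    }

    // The dog decides how to present herself; callers never pull raw data out.
    void printTo(StringBuilder out) {
        out.append("Dog named ").append(this.name);
    }
}
```

With this design, client code asks the dog to describe herself rather than extracting her name and formatting it externally.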
https://www.yegor256.com/2014/09/10/anti-patterns-in-oop.html
Anti-Patterns in OOP
- Yegor Bugayenko
https://www.yegor256.com/2014/06/09/objects-should-be-immutable.html
Objects Should Be Immutable
- Yegor Bugayenko
In object-oriented programming, an object is immutable if its state can’t be modified after it is created. In Java, a good example of an immutable object is String. Once created, we can’t modify its state. We can request that it creates new strings, but its own state will never change.
However, there are not so many immutable classes in JDK. Take, for example, class Date. It is possible to modify its state using setTime().
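The contrast is easy to demonstrate. The Demo class name below is mine; the String and Date behavior shown is standard JDK:

```java
import java.util.Date;

public class Demo {
    public static void main(String[] args) {
        // String: a "modifying" method returns a new object;
        // the original state never changes.
        String s = "immutable";
        String upper = s.toUpperCase();
        System.out.println(s);     // still "immutable"
        System.out.println(upper); // "IMMUTABLE", a different object

        // Date: setTime() mutates the very same object in place.
        Date d = new Date(0L);
        d.setTime(1000L);
        System.out.println(d.getTime()); // 1000
    }
}
```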
I don’t know why the JDK designers made these two very similar classes so different. However, I believe that the design of a mutable Date has many flaws, while the immutable String is much more in the spirit of the object-oriented paradigm.
Moreover, I think that all classes should be immutable in a perfect object-oriented world. Unfortunately, sometimes it is technically not possible due to limitations of the JVM. Nevertheless, we should always aim for the best.
This is an incomplete list of arguments in favor of immutability:
- immutable objects are simpler to construct, test, and use
- truly immutable objects are always thread-safe
- they help to avoid temporal coupling
- their usage is side-effect free (no defensive copies)
- identity mutability problem is avoided
- they always have failure atomicity
- they are much easier to cache
- they prevent NULL references, which are bad
Let’s discuss the most important arguments one by one.
Thread Safety
The first and the most obvious argument is that immutable objects are thread-safe. This means that multiple threads can access the same object at the same time without clashing with one another.

If no method of an object can modify its state, then no matter how many threads call its methods, and how often they call them in parallel, each call works with its own data on its own stack.
Goetz et al. explained the advantages of immutable objects in more detail in their famous book Java Concurrency in Practice (highly recommended).
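As a minimal illustration (my own sketch, not code from the book): an immutable class keeps its fields final, sets them once in the constructor, and returns new instances instead of mutating itself.

```java
// A minimal immutable class: the single field is final and is set
// once in the constructor; "modification" returns a new instance,
// so concurrent readers can never observe a half-updated state.
final class Cash {
    private final int cents;

    Cash(int cents) {
        this.cents = cents;
    }

    int cents() {
        return this.cents;
    }

    // Instead of mutating this object, produce a new one.
    Cash plus(int delta) {
        return new Cash(this.cents + delta);
    }
}
```

Because no method ever writes to cents after construction, any number of threads can share one Cash instance without synchronization.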
Avoiding Temporal Coupling
Here is an example of temporal coupling (the code makes two consecutive HTTP POST requests, where the second one contains an HTTP body):
Request request = new Request("http://localhost");
request.method("POST");
String first = request.fetch();
request.body("text=hello");
String second = request.fetch();

This code works. However, you must remember that the first request has to be configured before the second one may happen. If we decide to remove the first request from the script, we will remove the second and the third lines, and won’t get any errors from the compiler:
Request request = new Request("http://localhost");
// request.method("POST");
// String first = request.fetch();
request.body("text=hello");
String second = request.fetch();

Now the script is broken, although it compiles without errors. This is what temporal coupling is about: there is always some hidden information in the code that a programmer has to remember. In this example, we have to remember that the configuration of the first request is also used for the second one.
We have to remember that the second request should always stay together with the first one and be executed after it.
If the Request class were immutable, the first snippet wouldn’t work in the first place and would have to be rewritten like this:
final Request request = new Request("http://localhost");
String first = request.method("POST").fetch();
String second = request.method("POST").body("text=hello").fetch();

Now these two requests are not coupled. We can safely remove the first one, and the second one will still work correctly. You may point out that there is some code duplication. Yes, we should get rid of it and rewrite the code:
final Request request = new Request("http://localhost");
final Request post = request.method("POST");
String first = post.fetch();
String second = post.body("text=hello").fetch();

See, the refactoring didn’t break anything, and we still don’t have temporal coupling. The first request can be removed safely from the code without affecting the second one.
I hope this example demonstrates that code manipulating immutable objects is more readable and maintainable, because it doesn’t have temporal coupling.
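A hypothetical immutable Request along these lines might look as follows. This is an illustrative sketch under the assumption that fetch() would perform the actual I/O; here it merely describes the request, so the sketch stays runnable, and all names are mine:

```java
// Illustrative sketch of an immutable, fluent Request: every
// "wither" method returns a new instance, so configuring one
// request can never affect another.
final class Request {
    private final String uri;
    private final String method;
    private final String body;

    Request(String uri) {
        this(uri, "GET", "");
    }

    private Request(String uri, String method, String body) {
        this.uri = uri;
        this.method = method;
        this.body = body;
    }

    Request method(String m) {
        return new Request(this.uri, m, this.body);
    }

    Request body(String b) {
        return new Request(this.uri, this.method, b);
    }

    // A real implementation would perform the HTTP call here;
    // we just render the request description instead.
    String fetch() {
        return this.method + " " + this.uri + " [" + this.body + "]";
    }
}
```

Note how body() cannot retroactively change a request someone else already holds: it can only produce a new one.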
Avoiding Side Effects
Let’s try to use our Request class (the mutable version again) in a new method:
public String post(Request request) {
request.method("POST");
return request.fetch();
}

Let’s try to make two requests: the first with the GET method and the second with POST:
Request request = new Request("http://localhost");
request.method("GET");
String first = this.post(request);
String second = request.fetch();

The method post() has a “side effect”: it makes changes to the mutable object request. These changes are not really expected here. We expect it to make a POST request and return its body. We don’t want to read its documentation just to find out that, behind the scenes, it also modifies the request we pass to it as an argument.
Needless to say, such side effects lead to bugs and maintainability issues. It would be much better to work with an immutable Request:
public String post(Request request) {
return request.method("POST").fetch();
}

In this case, we can’t have any side effects. Nobody can modify our request object, no matter where it is used and how deep into the call stack it is passed by method calls:
Request request = new Request("http://localhost").method("GET");
String first = this.post(request);
String second = request.fetch();

This code is perfectly safe and side-effect free.
Avoiding Identity Mutability
Very often, we want objects to be considered equal if their internal states are the same. The Date class is a good example:
Date first = new Date(1L);
Date second = new Date(1L);
assert first.equals(second); // true

These are two different objects; however, they are equal to each other, because their encapsulated states are the same. This is made possible by their custom overridden implementations of the equals() and hashCode() methods.
The consequence of this convenient approach, when used with mutable objects, is that every time we modify an object’s state, we change its identity:
Date first = new Date(1L);
Date second = new Date(1L);
first.setTime(2L);
assert first.equals(second); // false

This may look natural, until you start using your mutable objects as keys in maps:
Map<Date, String> map = new HashMap<>();
Date date = new Date();
map.put(date, "hello, world!");
date.setTime(12345L);
assert map.containsKey(date); // false

When modifying the state of the date object, we don’t expect it to change its identity. We don’t expect to lose an entry in the map just because the state of its key has changed. However, this is exactly what happens in the example above.
When we add an object to the map, its hashCode() returns one value. This value is used by HashMap to place the entry into its internal hash table. When we call containsKey(), the hash code of the object is different (because it is based on its internal state), and HashMap can’t find it in the internal hash table.
This is a very annoying and difficult-to-debug side effect of mutable objects. Immutable objects avoid it completely.
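The whole trap fits in a few lines (a demonstration sketch; the IdentityDemo name is mine):

```java
import java.util.Date;
import java.util.HashMap;
import java.util.Map;

// Demonstration of the identity-mutability trap: after the key's
// state (and therefore its hashCode()) changes, HashMap looks in
// the wrong bucket and the entry is effectively lost.
final class IdentityDemo {
    static boolean lostAfterMutation() {
        Map<Date, String> map = new HashMap<>();
        Date date = new Date(0L);
        map.put(date, "hello, world!");
        date.setTime(12345L); // the key's hashCode() changes here
        return !map.containsKey(date);
    }
}
```

With an immutable key type (String, Instant, or our own immutable class) this situation is impossible by construction.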
Failure Atomicity
Here is a simple example:
public class Stack {
private int size;
private String[] items;
public void push(String item) {
size++;
if (size > items.length) {
throw new RuntimeException("stack overflow");
}
items[size - 1] = item;
}
}

It is obvious that an object of the class Stack will be left in a broken state if push() throws a runtime exception on overflow: its size will already have been incremented, while items won’t have received a new element.

Immutability prevents this problem. An object can never be left in a broken state, because its state is modified only in its constructor. The constructor either fails, rejecting object instantiation, or succeeds, producing a valid, solid object that never changes its encapsulated state.
For more on this subject, read Effective Java, 2nd Edition by Joshua Bloch.
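For contrast, a stack can be made immutable so that push() returns a new stack and a failed push leaves every existing object intact. This is an illustrative sketch of mine, not code from the book; the capacity of three is arbitrary, chosen only to make the overflow easy to trigger:

```java
import java.util.Arrays;

// An immutable stack: push() returns a new stack instead of
// mutating this one, so a failed push cannot corrupt any state.
final class ImmutableStack {
    private final String[] items;

    ImmutableStack() {
        this(new String[0]);
    }

    private ImmutableStack(String[] items) {
        this.items = items;
    }

    ImmutableStack push(String item) {
        if (this.items.length >= 3) { // tiny capacity, for the demo
            throw new IllegalStateException("stack overflow");
        }
        String[] extended = Arrays.copyOf(this.items, this.items.length + 1);
        extended[this.items.length] = item;
        return new ImmutableStack(extended);
    }

    int size() {
        return this.items.length;
    }
}
```

If push() throws, no object anywhere has changed; failure atomicity comes for free.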
Arguments Against Immutability
There are a number of arguments against immutability.
“Immutability is not for enterprise systems.” Very often, I hear people say that immutability is a fancy feature that is absolutely impractical in real enterprise systems. As a counter-argument, I can only show some examples of real-life applications that contain only immutable Java objects:
jcabi-http, jcabi-xml, jcabi-github, jcabi-s3, jcabi-dynamo, jcabi-w3c, jcabi-jdbc, jcabi-simpledb, jcabi-ssh. The above are all Java libraries that work solely with immutable classes/objects. netbout.com and stateful.co are web applications that work solely with immutable objects.
“It’s cheaper to update an existing object than create a new one.” Oracle thinks that “The impact of object creation is often overestimated and can be offset by some of the efficiency associated with immutable objects. These include decreased overhead due to garbage collection, and the elimination of code needed to protect mutable objects from corruption.” I agree.
If you have some other arguments, please post them below and I’ll try to comment.
P.S. Check takes.org, a Java web framework that consists entirely of immutable objects.
A simple example of NULL usage in Java:
public Employee getByName(String name) {
int id = database.find(name);
if (id == 0) {
return null;
}
return new Employee(id);
}

What is wrong with this method?
It may return NULL instead of an object—that’s what is wrong. NULL is a terrible practice in an object-oriented paradigm and should be avoided at all costs. There have been a number of opinions about this published already, including Null References, The Billion Dollar Mistake presentation by Tony Hoare and the entire Object Thinking book by David West.
Here, I’ll try to summarize all the arguments and show examples of how NULL usage can be avoided and replaced with proper object-oriented constructs.
Basically, there are two possible alternatives to NULL.
The first one is the Null Object design pattern (the best way is to make it a constant):
public Employee getByName(String name) {
int id = database.find(name);
if (id == 0) {
return Employee.NOBODY;
}
return new Employee(id);
}

The second possible alternative is to fail fast by throwing an exception when you can’t return an object:
public Employee getByName(String name) {
int id = database.find(name);
if (id == 0) {
throw new EmployeeNotFoundException(name);
}
return new Employee(id);
}

Now, let’s see the arguments against NULL.
Besides Tony Hoare’s presentation and David West’s book mentioned above, I read these publications before writing this post: Clean Code by Robert Martin, Code Complete by Steve McConnell, Say “No” to “Null” by John Sonmez, Is returning null bad design? discussion at StackOverflow.
Ad-hoc Error Handling
Every time you get an object as an input, you must check whether it is NULL or a valid object reference. If you forget to check, a NullPointerException (NPE) may break execution at runtime. Thus, your logic becomes polluted with multiple checks and if/then/else forks:
// this is a terrible design, don't reuse
Employee employee = dept.getByName("Jeffrey");
if (employee == null) {
System.out.println("can't find an employee");
System.exit(-1);
} else {
employee.transferTo(dept2);
}

This is how exceptional situations are supposed to be handled in C and other imperative procedural languages. OOP introduced exception handling primarily to get rid of these ad-hoc error handling blocks. In OOP, we let exceptions bubble up until they reach an application-wide error handler, and our code becomes much cleaner and shorter:
dept.getByName("Jeffrey").transferTo(dept2);

Consider NULL references a legacy of procedural programming, and use 1) Null Objects or 2) Exceptions instead.
Ambiguous Semantics
In order to explicitly convey its meaning, the function getByName() would have to be named getByNameOrNullIfNotFound(). The same goes for every function that returns either an object or NULL. Otherwise, ambiguity is inevitable for the reader. Thus, to keep semantics unambiguous, you would have to give functions longer names.
To get rid of this ambiguity, always return a real object, a null object, or throw an exception.
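The “constant” form of the Null Object pattern mentioned earlier could be sketched like this (illustrative; this exact Employee interface is my assumption, not the author’s code):

```java
// Sketch of a constant Null Object: Employee.NOBODY behaves like
// an employee for harmless operations and fails loudly for the
// operations that make no sense on a non-existent employee.
interface Employee {
    Employee NOBODY = new Employee() {
        @Override
        public String name() {
            return "anonymous";
        }
        @Override
        public void transferTo(String dept) {
            throw new IllegalStateException(
                "I can't be transferred, I'm nobody"
            );
        }
    };
    String name();
    void transferTo(String dept);
}
```

A caller of getByName() then always receives a real Employee and never needs a null check.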
Some may argue that we sometimes have to return NULL, for the sake of performance. For example, method get() of interface Map in Java returns NULL when there is no such item in the map:
Employee employee = employees.get("Jeffrey");
if (employee == null) {
throw new EmployeeNotFoundException();
}
return employee;

This code searches the map only once, thanks to the usage of NULL in Map. If we refactored Map so that its method get() threw an exception when nothing is found, our code would look like this:
if (!employees.containsKey("Jeffrey")) { // first search
throw new EmployeeNotFoundException();
}
return employees.get("Jeffrey"); // second search

Obviously, this method is twice as slow as the first one. What to do?
The Map interface (no offense to its authors) has a design flaw. Its method get() should have returned an Iterator, so that our code would look like this:
Iterator<Employee> found = employees.search("Jeffrey");
if (!found.hasNext()) {
throw new EmployeeNotFoundException();
}
return found.next();

By the way, that is exactly how the C++ STL map::find() method is designed.
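Such an iterator-returning search might be sketched like this (all names are illustrative and mine; this is not the real Map API):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

// Sketch of a lookup that returns an Iterator instead of NULL:
// the iterator carries zero or one element, the map is consulted
// only once, and the caller decides how to react to "not found"
// without any null checks.
final class Employees {
    private final Map<String, Integer> ids = new HashMap<>();

    void add(String name, int id) {
        this.ids.put(name, id);
    }

    Iterator<Integer> search(String name) {
        if (!this.ids.containsKey(name)) {
            return Collections.emptyIterator();
        }
        return Collections.singletonList(this.ids.get(name)).iterator();
    }
}
```

Checking found.hasNext() replaces the null check, and the intent (“there may be no result”) is visible in the return type.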
Computer Thinking vs. Object Thinking
The statement if (employee == null) is understood by someone who knows that an object in Java is a pointer to a data structure, and that NULL is a pointer to nothing (0x00000000 on Intel x86 processors).
However, if you start thinking like an object, this statement makes much less sense. This is how our code looks from an object’s point of view:
- Hello, is it a software department?
- Yes.
- Let me talk to your employee "Jeffrey" please.
- Hold the line please...
- Hello.
- Are you NULL?

The last question in this conversation sounds weird, doesn’t it?
Instead, if they hang up the phone after our request to speak to Jeffrey, that is a problem for us (an Exception). At that point, we either try to call again or inform our supervisor that we can’t reach Jeffrey and complete the bigger transaction.
Alternatively, they may let us speak to another person, who is not Jeffrey, but who can help with most of our questions or refuse to help if we need something “Jeffrey specific” (Null Object).
Slow Failing
Instead of failing fast, code that returns NULL dies slowly, killing others on its way. Instead of letting everyone know that something went wrong and that exception handling should start immediately, it hides the failure from its client.
This argument is close to the “ad-hoc error handling” discussed above.
It is a good practice to make your code as fragile as possible, letting it break when necessary.
Make your methods extremely demanding as to the data they manipulate. Let them complain by throwing exceptions if the provided data is not sufficient or simply doesn’t fit the main usage scenario of the method.
Otherwise, return a Null Object that exposes some common behavior and throws exceptions on all other calls:
public Employee getByName(String name) {
int id = database.find(name);
Employee employee;
if (id == 0) {
employee = new Employee() {
@Override
public String name() {
return "anonymous";
}
@Override
public void transferTo(Department dept) {
throw new AnonymousEmployeeException(
"I can't be transferred, I'm anonymous"
);
}
};
} else {
employee = new Employee(id);
}
return employee;
}

Say, you are designing a method findUserByName(), which has to find a user in the database. What would you return if nothing is found? #elegantobjects
--- Yegor Bugayenko (@yegor256) April 29, 2018
Mutable and Incomplete Objects
In general, it is highly recommended to design objects with immutability in mind. This means that an object gets all the necessary knowledge during its instantiation and never changes its state during its entire life cycle.
Very often, NULL values are used in lazy loading, to make objects incomplete and mutable. For example:
public class Department {
private Employee found = null;
public synchronized Employee manager() {
if (this.found == null) {
this.found = new Employee("Jeffrey");
}
return this.found;
}
}

This technique, although widely used, is an anti-pattern in OOP, mostly because it makes an object responsible for performance problems of the computational platform, which is something an Employee object should not be aware of.
Instead of managing its state and exposing business-relevant behavior, the object has to take care of caching its own results; this is what lazy loading is about.
Caching is not something an employee does in the office, does he?
The solution? Don’t use lazy loading in such a primitive way, as in the example above. Instead, move this caching problem to another layer of your application.
For example, in Java, you can use aspect-oriented programming. jcabi-aspects has a @Cacheable annotation that caches the value returned by a method:
import com.jcabi.aspects.Cacheable;
public class Department {
@Cacheable(forever = true)
public Employee manager() {
return new Employee("Jacky Brown");
}
}

I hope this analysis was convincing enough that you will stop NULL-ing your code :)
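Another way to move caching to a different layer, without AOP, is a decorator. This is an illustrative sketch of mine (the names PlainDept and CachedDept are not from the post): the domain object stays clean, and a wrapper owns the caching concern.

```java
// Moving caching out of the domain object: a decorator caches
// the manager() result, while the plain object stays oblivious
// to performance concerns. (Illustrative sketch; names are mine.)
interface Dept {
    String manager();
}

final class PlainDept implements Dept {
    private int lookups = 0; // visible only to demonstrate caching

    @Override
    public String manager() {
        this.lookups++; // imagine an expensive database query here
        return "Jacky Brown";
    }

    int lookups() {
        return this.lookups;
    }
}

final class CachedDept implements Dept {
    private final Dept origin;
    private boolean loaded = false;
    private String cached = "";

    CachedDept(Dept origin) {
        this.origin = origin;
    }

    @Override
    public String manager() {
        if (!this.loaded) {
            this.cached = this.origin.manager();
            this.loaded = true;
        }
        return this.cached;
    }
}
```

Callers depend only on Dept, so whether the result is cached is decided at composition time, not inside the object itself.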
https://www.yegor256.com/2014/05/13/why-null-is-bad.html
Why NULL is Bad?
- Yegor Bugayenko
- Translated: Japanese
A simple example of NULL usage in Java:
public Employee getByName(String name) {
int id = database.find(name);
if (id == 0) {
return null;
}
return new Employee(id);
}
What is wrong with this method?
It may return NULL instead of an object—that’s what is wrong. NULL is a terrible practice in an object-oriented paradigm and should be avoided at all costs. A number of opinions about this have been published already, including the Null References: The Billion Dollar Mistake presentation by Tony Hoare and the entire Object Thinking book by David West.
Here, I’ll try to summarize all the arguments and show examples of how NULL usage can be avoided and replaced with proper object-oriented constructs.
Basically, there are two possible alternatives to NULL.
The first one is the Null Object design pattern (the best way is to make it a constant):
public Employee getByName(String name) {
int id = database.find(name);
if (id == 0) {
return Employee.NOBODY;
}
return new Employee(id);
}
The second possible alternative is to fail fast by throwing an exception when you can’t return an object:
public Employee getByName(String name) {
int id = database.find(name);
if (id == 0) {
throw new EmployeeNotFoundException(name);
}
return new Employee(id);
}
Now, let’s see the arguments against NULL.
Besides Tony Hoare’s presentation and David West’s book mentioned above, I read these publications before writing this post: Clean Code by Robert Martin, Code Complete by Steve McConnell, Say “No” to “Null” by John Sonmez, Is returning null bad design? discussion at StackOverflow.
Ad-hoc Error Handling
Every time you get an object as an input, you must check whether it is NULL or a valid object reference. If you forget to check, a NullPointerException (NPE) may break execution at runtime. Thus, your logic becomes polluted with multiple checks and if/then/else forks:
// this is a terrible design, don't reuse
Employee employee = dept.getByName("Jeffrey");
if (employee == null) {
System.out.println("can't find an employee");
System.exit(-1);
} else {
employee.transferTo(dept2);
}
This is how exceptional situations are supposed to be handled in C and other imperative procedural languages. OOP introduced exception handling primarily to get rid of these ad-hoc error handling blocks. In OOP, we let exceptions bubble up until they reach an application-wide error handler, and our code becomes much cleaner and shorter:
dept.getByName("Jeffrey").transferTo(dept2);
Consider NULL references a legacy of procedural programming, and use 1) Null Objects or 2) Exceptions instead.
Ambiguous Semantics
In order to explicitly convey its meaning, the function getByName() would have to be named getByNameOrNullIfNotFound(). The same goes for every function that returns either an object or NULL; otherwise, ambiguity is inevitable for the reader. Thus, to keep the semantics unambiguous, you would have to give functions longer and longer names.
To get rid of this ambiguity, always return a real object, return a null object, or throw an exception.
Some may argue that we sometimes have to return NULL, for the sake of performance. For example, method get() of interface Map in Java returns NULL when there is no such item in the map:
Employee employee = employees.get("Jeffrey");
if (employee == null) {
throw new EmployeeNotFoundException();
}
return employee;
This code searches the map only once, thanks to the use of NULL in Map. If we refactored Map so that its method get() threw an exception when nothing is found, our code would look like this:
if (!employees.containsKey("Jeffrey")) { // first search
throw new EmployeeNotFoundException();
}
return employees.get("Jeffrey"); // second search
Obviously, this method is twice as slow as the first one. What to do?
The Map interface (no offense to its authors) has a design flaw. Its method get() should have returned an Iterator, so that our code would look like this:
Iterator found = employees.search("Jeffrey");
if (!found.hasNext()) {
throw new EmployeeNotFoundException();
}
return found.next();
BTW, that is exactly how the C++ STL map::find() method is designed.
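In Java, the same idea can be sketched with a hypothetical Employees wrapper (the class and method names are assumptions): the single NULL check that Map.get() forces on us is confined to one place, and callers only ever see an Iterator:

```java
import java.util.Collections;
import java.util.Iterator;
import java.util.Map;

final class Employee {
    private final String name;
    Employee(String name) { this.name = name; }
    String name() { return this.name; }
}

final class Employees {
    private final Map<String, Employee> map;
    Employees(Map<String, Employee> map) { this.map = map; }
    // One map lookup; an empty Iterator means "not found"
    Iterator<Employee> search(String name) {
        Employee found = this.map.get(name); // the only NULL check, confined here
        return found == null
            ? Collections.<Employee>emptyIterator()
            : Collections.singletonList(found).iterator();
    }
}
```

The map is still searched only once, but NULL never leaks out of the wrapper.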
Computer Thinking vs. Object Thinking
The statement if (employee == null) makes sense to someone who knows that an object in Java is a pointer to a data structure, and that NULL is a pointer to nothing (0x00000000 on Intel x86 processors).
However, if you start thinking as an object, this statement makes much less sense. This is how our code looks from an object point of view:
- Hello, is it a software department?
- Yes.
- Let me talk to your employee "Jeffrey" please.
- Hold the line please...
- Hello.
- Are you NULL?
The last question in this conversation sounds weird, doesn’t it?
Instead, if they hang up the phone after our request to speak to Jeffrey, that causes a problem for us (Exception). At that point, we try to call again or inform our supervisor that we can’t reach Jeffrey and complete a bigger transaction.
Alternatively, they may let us speak to another person, who is not Jeffrey, but who can help with most of our questions or refuse to help if we need something “Jeffrey specific” (Null Object).
Slow Failing
Instead of failing fast, the code above tries to die slowly, killing others on its way. Instead of letting everyone know that something went wrong and that exception handling should start immediately, it hides the failure from its client.
This argument is close to the “ad-hoc error handling” discussed above.
It is a good practice to make your code as fragile as possible, letting it break when necessary.
Make your methods extremely demanding about the data they manipulate. Let them complain by throwing exceptions if the provided data is not sufficient or simply doesn’t fit the method’s main usage scenario.
Alternatively, return a Null Object that exposes some common behavior and throws exceptions on all other calls:
public Employee getByName(String name) {
int id = database.find(name);
Employee employee;
if (id == 0) {
employee = new Employee() {
@Override
public String name() {
return "anonymous";
}
@Override
public void transferTo(Department dept) {
throw new AnonymousEmployeeException(
"I can't be transferred, I'm anonymous"
);
}
};
} else {
employee = new Employee(id);
}
return employee;
}
Say, you are designing a method findUserByName(), which has to find a user in the database. What would you return if nothing is found? #elegantobjects
--- Yegor Bugayenko (@yegor256) April 29, 2018
Mutable and Incomplete Objects
In general, it is highly recommended to design objects with immutability in mind. This means that an object gets all the necessary knowledge during instantiation and never changes its state during its entire life-cycle.
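A minimal sketch of that principle (the class shape here is an assumption): every field is final, everything arrives through the constructor, and any "change" produces a new object:

```java
final class Employee {
    private final int id;
    private final String name;

    Employee(int id, String name) {
        this.id = id;
        this.name = name;
    }

    String name() {
        return this.name;
    }

    // "Modification" returns a new object; this one never mutates
    Employee renamed(String later) {
        return new Employee(this.id, later);
    }
}
```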
Very often, NULL values are used in lazy loading, to make objects incomplete and mutable. For example:
public class Department {
private Employee found = null;
public synchronized Employee manager() {
if (this.found == null) {
this.found = new Employee("Jeffrey");
}
return this.found;
}
}
This technique, although widely used, is an anti-pattern in OOP, mostly because it makes an object responsible for the performance problems of the computational platform, which is something an Employee object should not be aware of.
Instead of managing its state and exposing business-relevant behavior, the object has to take care of caching its own results—this is what lazy loading is about.
Caching is not something an employee does in the office, is it?
The solution? Don’t use lazy loading in such a primitive way, as in the example above. Instead, move this caching problem to another layer of your application.
In Java, for example, you can use aspect-oriented programming: jcabi-aspects has a @Cacheable annotation that caches the value returned by a method:
import com.jcabi.aspects.Cacheable;
public class Department {
@Cacheable(forever = true)
public Employee manager() {
return new Employee("Jacky Brown");
}
}
I hope this analysis was convincing enough that you will stop NULL-ing your code :)
https://www.yegor256.com/2014/05/05/oop-alternative-to-utility-classes.html
OOP Alternative to Utility Classes
- Yegor Bugayenko
- Translated: Japanese, Russian
A utility class (aka helper class) is a “structure” that has only static methods and encapsulates no state. StringUtils, IOUtils, FileUtils from Apache Commons; Iterables and Iterators from Guava, and Files from JDK7 are perfect examples of utility classes.
This design idea is very popular in the Java world (as well as C#, Ruby, etc.) because utility classes provide common functionality used everywhere.
Here, we want to follow the DRY principle and avoid duplication. Therefore, we place common code blocks into utility classes and reuse them when necessary:
// This is a terrible design, don't reuse
public class NumberUtils {
public static int max(int a, int b) {
return a > b ? a : b;
}
}
Indeed, this is a very convenient technique!
Utility Classes Are Evil
However, in an object-oriented world, utility classes are considered a very bad (some may even say “terrible”) practice.
There have been many discussions of this subject; to name a few: Are Helper Classes Evil? by Nick Malik, Why helper, singletons and utility classes are mostly bad by Simon Hart, Avoiding Utility Classes by Marshal Ward, Kill That Util Class! by Dhaval Dalal, Helper Classes Are A Code Smell by Rob Bagby.
Additionally, there are a few questions on StackExchange about utility classes: If a “Utilities” class is evil, where do I put my generic code?, Utility Classes are Evil.
A dry summary of all their arguments is that utility classes are not proper objects; therefore, they don’t fit into an object-oriented world. They were inherited from procedural programming, mostly because we were used to the functional decomposition paradigm back then.
Assuming you agree with the arguments and want to stop using utility classes, I’ll show by example how these creatures can be replaced with proper objects.
Procedural Example
Say, for instance, you want to read a text file, split it into lines, trim every line, and then save the results in another file. This can be done with FileUtils from Apache Commons:
void transform(File in, File out) {
Collection<String> src = FileUtils.readLines(in, "UTF-8");
Collection<String> dest = new ArrayList<>(src.size());
for (String line : src) {
dest.add(line.trim());
}
FileUtils.writeLines(out, dest, "UTF-8");
}
The above code may look clean; however, it is procedural programming, not object-oriented. We are manipulating data (bytes and bits) and explicitly instructing the computer where to retrieve it from and where to put it, on every single line of code. We’re defining a procedure of execution.
Object-Oriented Alternative
In an object-oriented paradigm, we should instantiate and compose objects, thus letting them manage data when and how they desire. Instead of calling supplementary static functions, we should create objects that are capable of exposing the behavior we are seeking:
public final class Max {
private final int a;
private final int b;
public Max(int x, int y) {
this.a = x;
this.b = y;
}
public int intValue() {
return this.a > this.b ? this.a : this.b;
}
}
This procedural call:
int max = NumberUtils.max(10, 5);
Will become object-oriented:
int max = new Max(10, 5).intValue();
Potato, potahto? Not really; just read on…
Objects Instead of Data Structures
This is how I would design the same file-transforming functionality as above but in an object-oriented manner:
void transform(File in, File out) {
Collection<String> src = new Trimmed(
new FileLines(new UnicodeFile(in))
);
Collection<String> dest = new FileLines(
new UnicodeFile(out)
);
dest.addAll(src);
}
FileLines implements Collection&lt;String&gt; and encapsulates all file reading and writing operations. An instance of FileLines behaves exactly like a collection of strings and hides all I/O: when we iterate it, a file is being read; when we addAll() to it, a file is being written.
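FileLines itself is not shown here, so the following is only a rough sketch of how such a class might look. One assumption for brevity: it reads eagerly via readAllLines(), while a real implementation would stream lazily and decorate the remaining Collection methods:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.AbstractCollection;
import java.util.Collection;
import java.util.Iterator;

// A collection that reads the file when iterated
// and writes it when addAll() is called.
final class FileLines extends AbstractCollection<String> {
    private final Path file;
    FileLines(Path file) { this.file = file; }
    @Override
    public Iterator<String> iterator() {
        try {
            return Files.readAllLines(this.file).iterator();
        } catch (IOException ex) {
            throw new UncheckedIOException(ex);
        }
    }
    @Override
    public boolean addAll(Collection<? extends String> lines) {
        try {
            Files.write(this.file, lines);
            return true;
        } catch (IOException ex) {
            throw new UncheckedIOException(ex);
        }
    }
    @Override
    public int size() {
        try {
            return Files.readAllLines(this.file).size();
        } catch (IOException ex) {
            throw new UncheckedIOException(ex);
        }
    }
}
```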
Trimmed also implements Collection<String> and encapsulates a collection of strings (Decorator pattern). Every time the next line is retrieved, it gets trimmed.
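A minimal sketch of such a decorator (assuming only iteration needs decorating here; a complete version would wrap the remaining Collection methods too):

```java
import java.util.AbstractCollection;
import java.util.Collection;
import java.util.Iterator;

// Decorator: trims each line at the moment it is retrieved.
final class Trimmed extends AbstractCollection<String> {
    private final Collection<String> origin;
    Trimmed(Collection<String> origin) { this.origin = origin; }
    @Override
    public Iterator<String> iterator() {
        final Iterator<String> inner = this.origin.iterator();
        return new Iterator<String>() {
            @Override public boolean hasNext() { return inner.hasNext(); }
            @Override public String next() { return inner.next().trim(); }
        };
    }
    @Override
    public int size() { return this.origin.size(); }
}
```

Nothing is trimmed until someone actually iterates, which is exactly the lazy behavior the composition relies on.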
All classes participating in the snippet are rather small: Trimmed, FileLines, and UnicodeFile. Each of them is responsible for a single feature, thus perfectly following the single responsibility principle.
On our side, as users of the library, this may be not so important, but for their developers it is an imperative. It is much easier to develop, maintain and unit-test class FileLines rather than using a readLines() method in a 80+ methods and 3000 lines utility class FileUtils. Seriously, look at its source code.
An object-oriented approach enables lazy execution. The in file is not read until its data is required. If we fail to open out due to some I/O error, the first file won’t even be touched. The whole show starts only after we call addAll().
All lines in the second snippet, except the last one, instantiate and compose smaller objects into bigger ones. This object composition is rather cheap for the CPU since it doesn’t cause any data transformations.
Besides that, it is obvious that the second script runs in O(1) space, while the first one executes in O(n). This is the consequence of our procedural approach to data in the first script.
In an object-oriented world, there is no data; there are only objects and their behavior!
A utility class (aka helper class) is a “structure” that has only static methods and encapsulates no state. StringUtils, IOUtils, FileUtils from Apache Commons; Iterables and Iterators from Guava, and Files from JDK7 are perfect examples of utility classes.
This design idea is very popular in the Java world (as well as C#, Ruby, etc.) because utility classes provide common functionality used everywhere.
Here, we want to follow the DRY principle and avoid duplication. Therefore, we place common code blocks into utility classes and reuse them when necessary:
// This is a terrible design, don't reuse
public class NumberUtils {
public static int max(int a, int b) {
return a > b ? a : b;
}
}Indeed, this is a very convenient technique!
Utility Classes Are Evil
However, in an object-oriented world, utility classes are considered a very bad (some even may say “terrible”) practice.
There have been many discussions of this subject; to name a few: Are Helper Classes Evil? by Nick Malik, Why helper, singletons and utility classes are mostly bad by Simon Hart, Avoiding Utility Classes by Marshal Ward, Kill That Util Class! by Dhaval Dalal, Helper Classes Are A Code Smell by Rob Bagby.
Additionally, there are a few questions on StackExchange about utility classes: If a “Utilities” class is evil, where do I put my generic code?, Utility Classes are Evil.
A dry summary of all their arguments is that utility classes are not proper objects; therefore, they don't fit into an object-oriented world. They were inherited from procedural programming, mostly because we were used to the functional decomposition paradigm back then.
Assuming you agree with the arguments and want to stop using utility classes, I’ll show by example how these creatures can be replaced with proper objects.
Procedural Example
Say, for instance, you want to read a text file, split it into lines, trim every line, and then save the result in another file. This can be done with FileUtils from Apache Commons:
void transform(File in, File out) {
Collection<String> src = FileUtils.readLines(in, "UTF-8");
Collection<String> dest = new ArrayList<>(src.size());
for (String line : src) {
dest.add(line.trim());
}
FileUtils.writeLines(out, dest, "UTF-8");
}The above code may look clean; however, this is procedural programming, not object-oriented. We are manipulating data (bytes and bits) and explicitly instructing the computer from where to retrieve them and then where to put them on every single line of code. We’re defining a procedure of execution.
Object-Oriented Alternative
In an object-oriented paradigm, we should instantiate and compose objects, thus letting them manage data when and how they desire. Instead of calling supplementary static functions, we should create objects that are capable of exposing the behavior we are seeking:
public class Max extends Number {
private final int a;
private final int b;
public Max(int x, int y) {
this.a = x;
this.b = y;
}
@Override
public int intValue() {
return this.a > this.b ? this.a : this.b;
}
@Override
public long longValue() {
return this.intValue();
}
@Override
public float floatValue() {
return this.intValue();
}
@Override
public double doubleValue() {
return this.intValue();
}
}This procedural call:
int max = NumberUtils.max(10, 5);Will become object-oriented:
int max = new Max(10, 5).intValue();Potato, potato? Not really; just read on…
Objects Instead of Data Structures
This is how I would design the same file-transforming functionality as above but in an object-oriented manner:
void transform(File in, File out) {
Collection<String> src = new Trimmed(
new FileLines(new UnicodeFile(in))
);
Collection<String> dest = new FileLines(
new UnicodeFile(out)
);
dest.addAll(src);
}FileLines implements Collection<String> and encapsulates all file reading and writing operations. An instance of FileLines behaves exactly like a collection of strings and hides all I/O operations. When we iterate over it, the file is read; when we call addAll() on it, the file is written.
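FileLines is a hypothetical class here, not something from the JDK or Apache Commons, and the snippet doesn't show its internals. A minimal sketch of how it might look, assuming UTF-8 content and a java.nio-based implementation (AbstractCollection supplies the rest of the Collection contract):

```java
import java.io.*;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.util.*;

// Hypothetical sketch of FileLines: a Collection<String> whose
// iteration reads the file and whose addAll() writes it.
public class FileLines extends AbstractCollection<String> {
  private final File file;
  public FileLines(File file) {
    this.file = file;
  }
  @Override
  public Iterator<String> iterator() {
    try {
      // The file is opened only here, when iteration actually starts.
      return Files.readAllLines(
        this.file.toPath(), StandardCharsets.UTF_8
      ).iterator();
    } catch (IOException ex) {
      throw new UncheckedIOException(ex);
    }
  }
  @Override
  public boolean addAll(Collection<? extends String> lines) {
    try {
      // Creates the file if absent, truncates it otherwise.
      Files.write(this.file.toPath(), lines, StandardCharsets.UTF_8);
      return true;
    } catch (IOException ex) {
      throw new UncheckedIOException(ex);
    }
  }
  @Override
  public int size() {
    int count = 0;
    for (String line : this) {
      ++count;
    }
    return count;
  }
}
```

Note that nothing touches the disk until iterator() or addAll() is invoked; constructing the object is free, which is what makes the composition in transform() cheap.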
Trimmed also implements Collection<String> and encapsulates a collection of strings (Decorator pattern). Every time the next line is retrieved, it gets trimmed.
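Trimmed's implementation isn't shown either; here is a minimal sketch of how such a decorator might look, assuming it can wrap any Collection<String> and that trimming should happen lazily, at the moment a line is retrieved:

```java
import java.util.*;

// Hypothetical sketch of the Trimmed decorator: it wraps another
// Collection<String> and trims each element as it is retrieved.
public class Trimmed extends AbstractCollection<String> {
  private final Collection<String> origin;
  public Trimmed(Collection<String> origin) {
    this.origin = origin;
  }
  @Override
  public Iterator<String> iterator() {
    final Iterator<String> inner = this.origin.iterator();
    return new Iterator<String>() {
      @Override
      public boolean hasNext() {
        return inner.hasNext();
      }
      @Override
      public String next() {
        // Trimming happens here, lazily, not when Trimmed is constructed.
        return inner.next().trim();
      }
    };
  }
  @Override
  public int size() {
    return this.origin.size();
  }
}
```

Because Trimmed only decorates the iterator, it never materializes a second list of trimmed strings; the original collection stays untouched.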
All classes participating in the snippet are rather small: Trimmed, FileLines, and UnicodeFile. Each of them is responsible for a single feature, thus perfectly following the single responsibility principle.
On our side, as users of the library, this may not be so important, but for its developers it is an imperative. It is much easier to develop, maintain, and unit-test a class like FileLines than a readLines() method in FileUtils, a utility class with 80+ methods and some 3,000 lines. Seriously, look at its source code.
An object-oriented approach enables lazy execution. The in file is not read until its data is required. If we fail to open out due to some I/O error, the first file won’t even be touched. The whole show starts only after we call addAll().
All lines in the second snippet, except the last one, instantiate and compose smaller objects into bigger ones. This object composition is rather cheap for the CPU since it doesn’t cause any data transformations.
Besides that, it is obvious that the second script runs in O(1) space, while the first one executes in O(n). This is the consequence of our procedural approach to data in the first script.
In an object-oriented world, there is no data; there are only objects and their behavior!